PlayStation Y2K-Like Battery Bug About To Become A Problem As Sony Shuts Down Check In Servers

from the tick-tock dept

We’ve had a couple of discussions now about video game preservation, with the impetus being Sony’s shutdown of support for the PlayStation Store for PSP, PS3, and Vita owners. The general question was what happens to games for those systems in the very long term if suddenly nobody can get to them anymore, given that developers and publishers don’t always retain the source code and assets for these games on their end. That sort of thing is probably primarily of interest to us folks who look at these games as a form of art and culture, very much worth preserving.

But Sony may well have a much bigger issue on its hands. As a result of a strange internal time-check issue that exists on PS3 and PS4 consoles, there is the very real possibility that those consoles will be unable to play any purchased game soon if the end user replaces the battery on the device. It’s, well, it’s a bit like Y2K, but for real.

The root of the coming issue has to do with the CMOS battery inside every PS3 and PS4, which the systems use to keep track of the current time (even when they’re unplugged). If that battery dies or is removed for any reason, it raises an internal flag in the system’s firmware indicating the clock may be out of sync with reality.

After that flag is raised, the system in question has to check in with PSN the next time it needs to confirm the correct time. On the PS3, this online check happens when you play a game downloaded from the PlayStation Store. On the PS4, this also happens when you try to play retail games installed from a disc. This check has to be performed at least once even if the CMOS battery is replaced with a fresh one so the system can reconfirm clock consistency.
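
Based on that description, the mechanism amounts to a simple flag-and-reverify scheme. As a rough sketch only (all names here are hypothetical; Sony’s actual firmware logic is not public):

```python
class ConsoleClock:
    """Hypothetical model of the reported PS3/PS4 behavior, not real firmware."""

    def __init__(self):
        self.clock_suspect = False  # raised when CMOS power is lost

    def on_cmos_power_loss(self):
        # Battery died or was removed: the clock can no longer be trusted.
        self.clock_suspect = True

    def can_play_protected_game(self, psn_reachable):
        if not self.clock_suspect:
            return True
        if psn_reachable:
            # One successful PSN check-in reconfirms the clock and clears the flag.
            self.clock_suspect = False
            return True
        # Flag set and no server to check against: the game is blocked.
        return False
```

The failure mode falls straight out of such a model: once the check-in server is permanently unreachable, a single battery swap locks the affected titles out for good.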

But if support for PSN goes away on these systems, so does the system’s ability to check in to reconfirm the correct time. And if that happens, well, then suddenly any PS4 game will no longer be playable, nor will any PS3 game bought as a digital download. Sony, in other words, can essentially render these consoles mostly or totally useless for playing games just by shutting down PSN support for these consoles.

Now, why did Sony create this problem for itself in the first place? Well, the answer is different for each console. On the PS3, it was used to enforce “time limits” on digital downloads. For the PS4, it appears to have been used more to keep gamers from messing with how trophies are shown, specifically for when they were earned. Either way, neither of those is so important at this point that Sony should risk bricking bought consoles as a result.

Interestingly, the fix for this should be a simple firmware update… except that Sony hasn’t said a word about whether one is coming.

Sony could render the problem moot relatively easily with a firmware update that limits the system functions tied to this timing check. Thus far, though, Sony hasn’t publicly indicated it has any such plans and hasn’t responded to multiple requests for comment from Ars Technica. Until it does, complicated workarounds that make use of jailbroken firmware are the only option for ensuring that aging PlayStation hardware will remain fully usable well into the future.

I can’t imagine a single reason why Sony would want this looming customer crisis on its hands… unless it’s part of a plan to push the public to buy more, new-generation consoles and get their games back from there. If that is indeed the plan, the PR fallout is going to be insane.

Companies: sony


Comments on “PlayStation Y2K-Like Battery Bug About To Become A Problem As Sony Shuts Down Check In Servers”

75 Comments
This comment has been deemed insightful by the community.
That One Guy (profile) says:

'If you won't do it others will'

Until it does, complicated workarounds that make use of jailbroken firmware are the only option for ensuring that aging PlayStation hardware will remain fully usable well into the future.

And once more the pirates will have the better version of a product, with the only way for paying customers to access the games they’d paid for being to jailbreak the consoles they’d bought.

Never mind the PR black eye from leaving in place a problem that will brick entire consoles and game libraries; if they ignore this they’ll likely drive a good number of customers straight into the arms of their competitors (Microsoft has got to be salivating over this, as knowledge that Sony is willing to allow its consoles to be bricked through inaction is not what you want customers thinking about during a new console launch), and those that stick around are going to be a lot more willing to entertain the idea of treating Sony’s rules and property with the same disdain Sony is showing them.

This comment has been deemed insightful by the community.
GHB (profile) says:

This may be sony's planned obsolescence

unless it’s part of a plan to push the public to buy more, new-generation consoles and get their games back from there

That is planned obsolescence

Sony could repeat this cycle of shutting down support for older consoles and releasing new PlayStation consoles, effectively turning your games into a subscription if you want to keep playing them. This has been awful ever since Adobe went cloud-only (or any other form of expiring licenses instead of perpetual ones; a subscription is technically an expiring license that gets renewed every time you pay).

Even worse is the fact that the consoles can get more expensive over time, on top of potentially having to repurchase games (and the more games you want to preserve, the more you’ll likely have to spend doing so).

And just like with many other awful publishers, both the consumers and the workers (game developers, music artists, writers, etc.) get screwed.

This comment has been deemed insightful by the community.
PaulT (profile) says:

Re: This may be sony's planned obsolescence

I doubt it’s planned. It’s more likely that nobody considered the long-term effects of the CMOS battery, which is by itself such a basic and common piece of motherboard tech they probably didn’t think about it at all.

"Even worse the fact that the consoles can increasingly get more expensive over time"

Can, but usually don’t. The launch price of the basic PS3 was the same as the launch price of the PS5 with optical drive. One of the things generally agreed to have killed the XBox One’s launch chances was that mandating the Kinect made it more expensive. With inflation, R&D costs and the tendency for console manufacturers to sell hardware at a loss in order to recoup through sales and services, I doubt Sony are going to make a lot of money by forcing people to upgrade in such a way.

Scary Devil Monastery (profile) says:

Re: Re: This may be sony's planned obsolescence

"I doubt it’s planned. It’s more likely that nobody considered the long-term effects of the CMOS battery, which is by itself such a basic and common piece of motherboard tech they probably didn’t think about it at all."

You forgot to apply Sony’s Razor; "Always attribute to malice and greed what would be adequately explained by incompetence".

PaulT (profile) says:

Re: Re: Re: This may be sony's planned obsolescence

I wouldn’t put it past them for sure. But, having worked with a lot of developers, this seems more like something they overlooked, or simply never thought about in terms of how things would operate 10 years in the future, than some evil guy twirling his moustache working out how they can screw over people after 2 generations.

Scary Devil Monastery (profile) says:

Re: Re: Re:2 This may be sony's planned obsolescence

I’d normally agree.

But it’s Sony we’re talking about. For that company the hypothesis of the moustache-twirling parody villain makes more sense than a quick "oops" by their devs.

They haven’t changed significantly since the days they thought it wise to include rootkit malware with people’s music purchases.

Benefit of the doubt is a good thing. Where Sony is concerned, however, such doubt has proven itself synonymous with naivety a few times too many for me to now give much credence to the idea that they are merely inept in harmful ways.

PaulT (profile) says:

Re: Re: Re:3 This may be sony's planned obsolescence

"They haven’t changed significantly since the days they thought it wise to include rootkit malware with people’s music purchases."

Well, that’s kind of my point. The rootkit fiasco was a company scrambling to "fix" an immediate problem. That takes a different mindset to deliberately inserting a flaw that will cause problems 2 generations of hardware down the road.

Sony probably don’t deserve the benefit of the doubt, but having worked in such corporations there are many explanations other than some grand plan. Indeed, Sony have already walked back most of the server shutdowns (for now) due to the backlash to this very issue.

Rekrul says:

Re: Re: This may be sony's planned obsolescence

I doubt it’s planned. It’s more likely that nobody considered the long-term effects of the CMOS battery, which is by itself such a basic and common piece of motherboard tech they probably didn’t think about it at all.

Yeah, it’s not like they had any examples to draw from. I mean, CMOS batteries have only been used in computer motherboards for a little over 30 years now and having to change the battery is a completely unheard of occurrence. Certainly it’s nothing that a hardware manufacturer could ever be expected to know about…

PaulT (profile) says:

Re: Re: Re: This may be sony's planned obsolescence

"Certainly it’s nothing that a hardware manufacturer could ever be expected to know about…"

The hardware manufacturers aren’t the problem here. The issue is how the software is dealing with perfectly natural hardware issues. CMOS batteries can be changed at will without a problem other than their awkward placing in the case. The problem is that the licensing for software is not designed to account for that once the servers are no longer available.

If you understand anything about the way software developers work in large corporations, you will understand that what’s obvious to one department is something the other department never consider.

Scary Devil Monastery (profile) says:

Re: Re: Re:2 This may be sony's planned obsolescence

"If you understand anything about the way software developers work in large corporations, you will understand that what’s obvious to one department is something the other department never consider."

All of which is true. It’s equally true that normal firmware is built to take the hardware into consideration, and although many might forget the effects of a power-out on a motherboard, I’m pretty damn sure no one who writes motherboard software is going to not have that part in mind.

In fact, that the motherboard can reset itself by phoning home suggests that the firmware programmers were quite clear that CMOS battery failure would happen and wrote specific code to ensure retained control.

Hence my conclusion is "Broken By Design" – as every other customer-screwing issue surrounding Sony products tends to be.

Rekrul says:

Re: Re: Re:2 This may be sony's planned obsolescence

The hardware manufacturers aren’t the problem here. The issue is how the software is dealing with perfectly natural hardware issues. CMOS batteries can be changed at will without a problem other than their awkward placing in the case. The problem is that the licensing for software is not designed to account for that once the servers are no longer available.

And yet they knew that the CMOS battery, which would eventually need to be replaced, was powering the part of the system that permitted access to digital content on the PS3 and ALL content on the PS4. They should have also known that Sony wouldn’t support the console forever and that at some point in the future, there would come a time when users could no longer connect to the server to reset it.

I also have to wonder: How many of the digital games that they "sold" have a set expiration date? And if the answer is that only some of them have such a time limit, why does losing the time block access to all digital content rather than just the ones with the time limit? And why would they block access to discs on the PS4? If the clock is out of sync, block the online components. Blocking all digital content and all discs effectively bricks the console.

I also take issue with Sony making the CMOS battery hard to change. Everyone knows it will need to be changed eventually, so make the damn thing easier to get to.

This comment has been deemed insightful by the community.
Bobvious says:

Possible workaround

These two guides show how the different battery connection types are handled for PS3, (Battery on a plug-in cable, and battery in circular holder).

1) PS3 slim, battery in circular holder, https://www.ifixit.com/Guide/PlayStation+3+Slim+PRAM+Battery+Replacement/3237

2) PS3 Fat, battery on a plug-in cable, https://www.sosav.com/guides/game-consoles/sony/sony-home/playstation-3/battery/

This one shows how to replace a PS4 battery https://www.youtube.com/watch?v=BwFDTh5PkI8

As you can see, the issue is that removing or reducing the battery voltage seen by the motherboard is the trigger. A non-trivial possible workaround is to supply a surrogate equivalent voltage source BEFORE you remove the old battery, and keep it in place until you have replaced the old battery.

If you’ve ever had the battery replaced in a modern car, you may have seen how a surrogate "keep-alive" battery is connected via a cigarette socket (or similar) to keep the ECU powered while the main battery is replaced. This way the ECU does not see a huge drop in supply voltage and will usually keep its old settings without needing a reset protocol. Once the new battery has been fitted and the engine has been started successfully, the keep-alive can be removed.

The same principle could be applied for the Playstation devices. It’s not trivial, and it will require some careful expertise.

Basically we are trying to achieve a way of attaching that external battery BEFORE you disconnect or remove the old one. This will almost certainly involve some careful soldering on the motherboard, and I don’t recommend that a novice user attempt it. Fortunately, almost any decent repair shop should be capable of doing it, and I expect some hobbyists will have the skills to also do it.

(Maybe someone will soon post a video on youtube to do this.)

As you can see from the help pages and the youtube video, there is a lot of case dismantling to even get to the battery in the first place, so it’s not necessarily a lot of extra work to do it.

If I was doing this ( lots of broad handwaving here ), I would fit an extra socket to the motherboard so that a battery on a cable can be plugged in first, then you remove the old battery and replace with the new one, then remove your keep-alive.

Alternatively, with two sockets, you could just swap between the batteries every 5 years or so.

It might be worth doing this if you’ve invested a lot in your PS games.

Anonymous Coward says:

Re: Possible workaround

and I expect some hobbyists will have the skills to also do it.

Almost everyone with electronics as a hobby will likely be capable, as surface mount has become the norm, and the board houses make multi-layer boards to hobbyist designs at reasonable prices and turnaround.

Bobvious says:

Re: Re: Possible workaround

"Almost everyone with electronics as a hobby will likely be capable"

Unfortunately there seems to be far fewer people with hobbies these days. But in any case, without having a PS3 or PS4 in front of me to see the rest of the PCB and do the mod myself, I didn’t want to suggest that it’s a doddle. The concept is simple enough. People just need to be careful and ensure they don’t short anything out, or make a half-fitted job with a connection that fails unexpectedly.

I expect (hope) someone will eventually make a video showing how to do it.

Anonymous Coward says:

Re: Possible workaround

If I was doing this ( lots of broad handwaving here ), I would fit an extra socket to the motherboard so that a battery on a cable can be plugged in first, then you remove the old battery and replace with the new one, then remove your keep-alive.

Handwaving or not, it’s a problem that shouldn’t exist in the first place.

People paid for the games and they should be able to use them. Period.

The need for a constant server check-in is something the industry came up with as a means to justify their increased prices and egos. After all, it’s not like the media industry ever truly competes with an identical product from a different supplier. Like, say, the way a grocer sells oranges. If one grocer sells oranges at $5.00 an orange while everyone else sells them for $1.00, then complains that they are losing sales, everyone laughs in that grocer’s face for charging more than the market will bear and expecting record profits. The media industry? They have the magic of copyright to do away with such worries and can dictate the price to their customers. There’s no laughing here for the consumer, only jackasses demanding entitlements for their shitty products, and then blaming their customers when the product doesn’t sell when they want it to.

Am I surprised that the PlayStations have a suicide battery in them? No. As others have pointed out, the industry has done this for years with the arcades. (How many of those arcades are left again?) I’m just disappointed that the industry still hasn’t learned that simple lesson: make quality products and don’t charge more than what the market will bear. The market doesn’t bear the current prices; that’s why the piracy rates are as strong as ever. Piracy, like all black markets, is created and maintained because the cost of a product is more than what the market deems it to be worth.

This comment has been deemed insightful by the community.
PaulT (profile) says:

"It’s, well, it’s a bit like Y2K, but for real."

This is your regular reminder that Y2K was a very real problem that was fixed by competent professionals in order to avert most of the foreseeable problems, it’s just that it was never the apocalyptic scenario some tabloids dreamed up.

nasch (profile) says:

Re: Re:

This is your regular reminder that Y2K was a very real problem that was fixed by competent professionals

Well, "fixed" in many cases. A lot of systems were updated to interpret two digit years differently. So 00-40, for example, are interpreted as 2000-2040, and 41-99 are 1941-1999. So we get this problem all over again if those systems are still around long enough to start getting dates beyond 2040. Since many have already been around 50 years, it’s not impossible. And at that point, there is no easy fix like there was last time.

Fortunately many systems were actually fixed properly, with four digit dates. So (hopefully) no problem there until the year 10000.
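
A minimal sketch of that windowing scheme (the pivot of 41 matches the example above; real systems each pick their own):

```python
def expand_two_digit_year(yy, pivot=41):
    """Sliding-window interpretation of a stored two-digit year.
    Years below the pivot map to 20xx, the rest to 19xx."""
    return 2000 + yy if yy < pivot else 1900 + yy
```

Which is exactly why it only defers the problem: once real dates reach the pivot year, the window has to move again or the interpretation breaks.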

PaulT (profile) says:

Re: Re: Re:

"Well, "fixed" in many cases. A lot of systems were updated to interpret two digit years differently"

Sure, with mainframes. Most terminals and personal computers were simply upgraded, which is what made Windows 95/98/NT so successful. The remaining issues are generally backend systems that couldn’t be easily replaced and they bought time by bringing COBOL people out of retirement to avoid the problem. The fact that the problem was thus avoided does not mean the problem did not exist.

"And at that point, there is no easy fix like there was last time."

The only real fix is to redesign software for the modern era, where the limitations of the 70s/80s no longer exist and other problems not considered at the time exist. Anything beyond that is asking for trouble, not only due to the known problems not being truly fixed, but due to the quickly vanishing number of people able to maintain the legacy systems.

"Fortunately many systems were actually fixed properly, with four digit dates. So (hopefully) no problem there until the year 10000."

I dare say that companies that continue running software written in the 70s past the time when all the people familiar with the languages they were written in have died will have problems long before that date. That they delayed one specific problem to a generation long past their own does not mean they won’t have others.

This comment has been deemed insightful by the community.
Paul B says:

Re: Re: Re: Re:

Hate to say it but we have more versions of the Y2K bug coming.

https://en.wikipedia.org/wiki/Year_2038_problem

Other programs are going to wrap around dates in 2038 (or so) to 1938 as systems could not upgrade to 4 digits and they are still being used today (bank mainframes, C&C software, embedded stuff as well).

I expect fixes to be almost as painful as Y2K or even more so since the people building this stuff won’t be found at all.

This comment has been deemed insightful by the community.
PaulT (profile) says:

Re: Re: Re:2 Re:

"I expect fixes to be almost as painful as Y2K or even more so since the people building this stuff won’t be found at all."

I disagree. The problem is easily solved by switching to 64 bit architecture, which most things sold in at least 2 decades before 2038 will have been. We still have 17 years to upgrade and replace affected 32 bit embedded system, and it’s an easy sell to bean counters given the risks of not having the systems replaced in things like banking and other systems.

Sure, there will be some companies that decide to risk having literally nobody left alive to fix a catastrophe (and it will be clear that this means any catastrophe, not just the predicted one). But, once it’s made clear to them that this is the case I’d expect a flurry of activity in 2035 or so to avoid the issues, for the few industries where this is still the case. In the meantime, no company that’s been set up in at least the prior 20 years will be facing a problem.

I know, never underestimate the stupidity of bean counters who would rather risk severe disaster than spend some money to avoid the predictable, but I think the natural movements of nearly 2 decades with no support for original code will drive most to migrate to a system unaffected before it happens.

nasch (profile) says:

Re: Re: Re:3 Re:

The problem is easily solved by switching to 64 bit architecture

The architecture is not the issue. Switching to 64 bit will do nothing to solve the problem of the underlying systems and databases using 2 digits to store years.

it’s an easy sell to bean counters given the risks of not having the systems replaced in things like banking and other systems.

Replacing such a system is never an easy sell. If it’s practical to update the system rather than replace it, that is almost always the correct path.

We still have 17 years to upgrade and replace affected 32 bit embedded system

The bigger issue will be any mainframes still in use.

But, once it’s made clear to them that this is the case I’d expect a flurry of activity in 2035 or so to avoid the issues, for the few industries where this is still the case.

Quite so.

PaulT (profile) says:

Re: Re: Re:4 Re:

"Switching to 64 bit will do nothing to solve the problem of the underlying systems and databases using 2 digits to store years."

True, but 64 bit does remove the epoch bug for systems that use it, and does make it so that virtually nothing made in the couple of decades before the date will encounter it. So, we are just talking about ancient legacy code.

"Replacing such a system is never an easy sell"

It gets easier when you mention that "nobody alive knows how to fix the system if a major catastrophe happens, and by the way I know the next date that will happen". Believe me, I’ve gone through a lot of incompetent managers who would rather pay a fortune to fix a problem tomorrow than pay pennies to fix it today, but most of them will agree to open the purse when you can provide concrete proof of the costs of not doing so.

"The bigger issue will be any mainframes still in use."

I’d argue that hardware support for those systems will be a clearer problem than software before 2038. I might be overly optimistic, but given that any support for those systems is naturally going to get harder (and thus more expensive) in the coming years, there is also going to be a natural move away from the legacy code and hardware before then. The tech we’re talking about was already old hat when I started my career, and I’m in my 40s. While there’s always the "if it’s not broke don’t fix it" attitude, at some point these things will have to be replaced.

nasch (profile) says:

Re: Re: Re:5 Re:

It gets easier when you mention that "nobody alive knows how to fix the system if a major catastrophe happens, and by the way I know the next date that will happen".

Yes, I would expect that is what it’s going to take. Even such a scenario will not be enough to retire 100% of these legacy systems. Now I’m wondering when the last COBOL program will be permanently shut down.

I’d argue that hardware support for those systems will be a clearer problem than software before 2038.

They’ll be moved to a VM if hardware support becomes an issue (if they haven’t been already).

Paul B says:

Re: Re: Re:8 Re:

Mainframes are one of the few systems that are quite poor for VMs. They are built with extreme amounts of data IO, redundancy, and specialized hardware that allows checksums for every calculation. The point of this technology (which is still made and sold today) is that a given calculation will be made correctly even when taking into account things like cosmic rays and other random bit flips.

VMs are often at the other end of the computing spectrum, shared load and trust that the message you get from system A will be accepted by all the other systems down the line. Caught errors are pushed backwards for correction in Error handling instead of being fixed by simply calculating the checksum.

TLDR: Use mainframes for banking or stocks and bonds when you have LOTS of data, calculations, and need them to be very accurate. While VMs are used for distributed load networking.

Anonymous Coward says:

Re: Re: Re:9 Re:

Mainframes are one of the few systems that are quite poor for VMs.

VMs were invented for mainframes, and have existed and been used since at least the mid 1970s. Indeed the modern Z systems are used to run multiple different operating systems under a VM, and were designed with VMs in mind. Due to mainframe architecture, and especially their IO architecture, their native OSes do not do well on microprocessor VMs, while Linux runs well on a mainframe VM.

Anonymous Coward says:

Re: Re: Re:4 Re:

You might want to check why 2038 is a problem before you say things that expose your ignorance. The issue is that Unix keeps track of time as the number of seconds from the start of 1970. And sometime in 2038, it will exceed 2^31-1 seconds (largest positive value for a signed 32 bit integer). So in an unpatched system, it will overflow to -2^31 seconds, or in other words, sometime in 1901. By upgrading to a 64 bit system, that overflow issue is pushed forward to sometime after the year 292 billion, or about 20 times the known age of the universe into the future.
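
The rollover is easy to demonstrate (timedelta arithmetic is used here to stay platform-independent):

```python
from datetime import datetime, timedelta

EPOCH = datetime(1970, 1, 1)  # Unix epoch, treated as UTC

# Largest value a signed 32-bit time_t can hold:
last_second = EPOCH + timedelta(seconds=2**31 - 1)

# One second later the counter wraps to -2**31, which decodes to 1901:
wrapped = EPOCH + timedelta(seconds=-(2**31))

print(last_second)  # 2038-01-19 03:14:07
print(wrapped)      # 1901-12-13 20:45:52
```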

Anonymous Coward says:

Re: Re: Re:2 Re:

Other programs are going to wrap around dates in 2038 (or so) to 1938 as systems could not upgrade to 4 digits and they are still being used today

That’s not how the 2038 problem works. It’s caused by a completely different time representation that’s based on counting seconds from the Unix epoch, not on 2-digit years, and the wraparound point is 1901, not 1938.

Rekrul says:

Re: Re: Re:2 Re:

Hate to say it but we have more versions of the Y2K bug coming.

I’ve heard about this before, but I’ve never been able to find a good explanation for WHY anyone would create a system like this.

"Hey Bob, we need a way to store the date and time in our new computer OS. Got any ideas? I was thinking maybe we use three bytes for the date, which will give us a range of about 45,000 years, and two bytes for the time. It will be simple and future-proof."

"Nah, we’ll pick an arbitrary date, and then calculate the time based on the number of seconds that have passed."

"Brilliant!"

Upstream (profile) says:

Re: Re: Re:3 Re:

I’ve heard about this before, but I’ve never been able to find a good explanation for WHY anyone would create a system like this.

Maybe there were completely different priorities and constraints in place when the Unix-epoch type of time systems were created? Computer RAM was much more limited and very expensive. Computers executed code drastically slower, so anything that cost extra CPU cycles was dreaded. Code was made as compact as possible, to minimize the size of the stacks of Hollerith cards, and the time it would take a program to execute.

Maybe a simple linear time system worked well with these priorities?

Maybe a somewhat more complicated (many would say absurd) 60 second per minute, 60 minute per hour, 24 hour per day, 7 day per week, 28/29/30/31 day per month and 12 month per year system conflicted with some of these priorities?

Today hundreds / thousands of lines of code and millions of CPU cycles every now and then may seem trivial and insignificant, but it was not always so.

Anonymous Coward says:

Re: Re: Re:4 Re:

Today hundreds / thousands of lines of code and millions of CPU cycles every now and then may seem trivial and insignificant, but it was not always so.

Except that converting seconds elapsed to a usable date requires an entire set of calculations. (Simple versions would use division here, which is very costly. A more optimized approach would have used multiplication, but would still require multiple rounds to make a usable timestamp.) Whereas a proper implementation of this (proper back in the time of DOS anyway) would have been directly accessible with a simple memory address fetch by any program. With timezone conversions being optional for most systems. (Again, at the time.)

There really isn’t a good reason for this to not exist today beyond tradition and the cost to switch. (The vast majority of which would actually be on the heads of the OS developers. Which isn’t much as they can use API layers to hide the transition. As only the OS has direct access to the RTC in a modern system.)
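
To make that "entire set of calculations" concrete, here is one standard days-to-date decoding, a Python rendering of Howard Hinnant’s civil-from-days algorithm; note the string of divisions needed before a human-readable date falls out:

```python
def civil_from_days(days):
    """Convert days since 1970-01-01 to (year, month, day).
    Python port of Hinnant's civil_from_days; Python's floor
    division makes it valid for negative inputs too."""
    z = days + 719468                      # shift epoch to 0000-03-01
    era = z // 146097                      # 400-year eras
    doe = z - era * 146097                 # day of era [0, 146096]
    yoe = (doe - doe // 1460 + doe // 36524 - doe // 146096) // 365
    year = yoe + era * 400
    doy = doe - (365 * yoe + yoe // 4 - yoe // 100)
    mp = (5 * doy + 2) // 153              # March-based month [0, 11]
    day = doy - (153 * mp + 2) // 5 + 1
    month = mp + 3 if mp < 10 else mp - 9
    return (year + (month <= 2), month, day)
```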

Rocky says:

Re: Re: Re:3 Re:

  1. Using 5 bytes is inefficient both to store and process, better to use multiples of 2.
  2. Using an arbitrary date and time as a zero-reference solves some other problems like for example "this event happened at 17309900 unix epoch" which when localized gives your local time for it regardless of what messed up timezone you live in, i.e. the unix epoch is absolute, a date-time isn’t necessarily absolute.
  3. Working with date-times is difficult to get right which is why they should be stored and processed in a normalized format (e.g. unix epoch). The number of fails I’ve seen in code where people make faulty assumptions in dealing with date-times is mindboggling.
  4. Any code written is imbued with the coder’s assumptions about its lifespan or oversight thereof, hence the original unix epoch is a 32-bit value.
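
Point 2 is easy to demonstrate: one epoch value, three local renderings (fixed offsets are used purely for illustration; real code should use a proper timezone database):

```python
from datetime import datetime, timezone, timedelta

ts = 17309900  # one absolute instant: seconds since the Unix epoch

utc = datetime.fromtimestamp(ts, tz=timezone.utc)
ny = datetime.fromtimestamp(ts, tz=timezone(timedelta(hours=-5)))
tokyo = datetime.fromtimestamp(ts, tz=timezone(timedelta(hours=9)))

# Three different wall-clock readings...
print(utc, ny, tokyo, sep="\n")

# ...but all the same instant:
assert utc == ny == tokyo
```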

Anonymous Coward says:

Re: Re: Re:4 Re:

Using 5 bytes is inefficient both to store and process, better to use multiples of 2.

Back in the days of COBOL, the extra byte needed for a multiple of 2 would have been measured in $$$.

Using an arbitrary date and time as a zero-reference solves some other problems. For example, "this event happened at 17309900 Unix epoch" when localized gives your local time for it regardless of what messed-up timezone you live in, i.e. the Unix epoch is absolute, while a date-time isn’t necessarily absolute.

That is a data presentation problem, not a processing one. The system described above works just as well for your use case: 1/1/2021 11:00:00 UTC converts just as well to and from any other timezone. If you needed something more accurate, you really shouldn’t be relying on the average RTC chip in the first place, and/or your system doesn’t include good enough synchronization mechanisms to track its inputs at the accuracy required for its intended use. That’s a design failure, and I’d imagine you’ll have additional problems as a result in that case. This is true today just as much as it would have been back then.

Working with date-times is difficult to get right, which is why they should be stored and processed in a normalized format (e.g. Unix epoch). The number of fails I’ve seen in code where people make faulty assumptions when dealing with date-times is mind-boggling.

Again, a data presentation problem. Data at rest should be encoded in a way that its meaning isn’t lost by being stored. If you have timestamps that aren’t convertible to whatever you need now, that means you didn’t store them accurately enough in the first place. The method above doesn’t need encoding; it’s literally stored as the date a human would use. It may need conversion if timezones get involved, but back in the days of COBOL that wasn’t as likely as it is today. Again, that conversion should be indefinitely repeatable regardless of storage method.

Any code written is imbued with the coder’s assumptions about its lifespan, or oversight thereof; hence the original Unix epoch is a 32-bit value.

Any sane designer, whether in the days of COBOL or today, should know about bean counters and their desire to push things to the absolute failure point before justifying any expenditure.

Speaking of COBOL, the state of Kentucky still uses it for their unemployment system. I’m sure that system is well past its expiration date as envisioned by its creators. Guess what? Even a global pandemic that brought the system to its knees multiple times wasn’t enough to justify its replacement in the eyes of the bean counters. Rather, they were too busy using said obsolete system as an excuse for their own failures. (An excuse I’m sure they’ll revisit and reuse in the future.)

Any modern programmer who thinks otherwise is just kidding themselves at this point, as well as creating future problems that will cost a boatload of time and money to fix.

Rocky says:

Re: Re: Re:5 Re:

That is a data presentation problem, not a processing one. The system described above works just as well for your use case: 1/1/2021 11:00:00 UTC converts just as well to and from any other timezone.

No, it’s not a data presentation problem. It’s a people-make-assumptions problem, which is why you remove the possibility of assumptions as far as you can. Using the Unix epoch or UTC is one such way, and systems today have support for it. Plus, the thing about timezones: they change. An older date-time in an odd TZ that has since changed will be incorrect if converted to the Unix epoch or UTC.

Again, a data presentation problem. Data at rest should be encoded in a way that its meaning isn’t lost by being stored. If you have timestamps that aren’t convertible to whatever you need now, that means you didn’t store them accurately enough in the first place. The method above doesn’t need encoding; it’s literally stored as the date a human would use. It may need conversion if timezones get involved, but back in the days of COBOL that wasn’t as likely as it is today. Again, that conversion should be indefinitely repeatable regardless of storage method.

Not really. Storing dates in a human-readable format (i.e. as a string) can be done when that date will never be used again (see the earlier explanation for the reason), but if the date is going to be used for comparison, or if you need to perform operations on it, you never store it in a human-readable format – you store it in a normalized format a computer can work with easily, without introducing errors and ambiguity. The only time it’s relevant to convert it to a human-readable format is when a human wants to read it.
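
A minimal illustration (Python) of why comparison should happen on the normalized form rather than the display form:

```python
# Lexicographic comparison of human-formatted dates gives the wrong order:
assert "10/1/2021" < "9/1/2021"        # "October" sorts before "September"!

# A normalized form compares correctly. Epoch seconds:
from datetime import datetime, timezone
sep_1 = datetime(2021, 9, 1, tzinfo=timezone.utc).timestamp()
oct_1 = datetime(2021, 10, 1, tzinfo=timezone.utc).timestamp()
assert sep_1 < oct_1                    # chronological order preserved

# (ISO-8601 strings are the one human-readable form that also sorts
# correctly, because the fields run from most to least significant.)
assert "2021-09-01" < "2021-10-01"
```
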

Any modern programmer who thinks otherwise is just kidding themselves at this point, as well as creating future problems that will cost a boatload of time and money to fix.

A smart programmer tries to make sure other people’s assumptions don’t fuck things up, now and in the future.

PaulT (profile) says:

Re: Re: Re:3 Re:

"I’ve heard about this before, but I’ve never been able to find a good explanation for WHY anyone would create a system like this."

Really? It’s quite simple. UNIX timestamps are stored in a single integer known as the epoch time: a count of seconds starting from 1st January 1970. The problem is that, due to limitations at the time UNIX was created, this is a 32-bit integer, which means it will max out (and thus create the same rollover problem that Y2K had) in 2038.

Newer 64-bit versions of UNIX-based OSes don’t have this limitation, as they’re naturally able to widen that counter to 64 bits, which pushes the rollover out by hundreds of billions of years.

If you actually look at what the problem is, why it exists, and why the setup is like it is, then it’s quite obvious what decisions were made and why. It’s also obvious what the fixes are before the date comes, if people are so inclined; it’s purely down to limitations that existed in the past that are not problems now.

There are some other issues mentioned which are essentially people kicking the Y2K can down the road, such as extending the rollover date a few decades, but the best-known and most dangerous one overall is the epoch time issue.
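
To put numbers on the rollover: a signed 32-bit counter of seconds since 1970 tops out in January 2038, and one tick later wraps to a date in 1901. A quick illustration (Python, doing the arithmetic with timedelta rather than any platform time_t):

```python
from datetime import datetime, timedelta, timezone

EPOCH = datetime(1970, 1, 1, tzinfo=timezone.utc)

# The last second a signed 32-bit time_t can represent:
last = EPOCH + timedelta(seconds=2**31 - 1)
print(last)     # 2038-01-19 03:14:07+00:00

# One second later, the counter wraps to -2**31, which a naive
# conversion reads as a date before the epoch:
wrapped = EPOCH + timedelta(seconds=-2**31)
print(wrapped)  # 1901-12-13 20:45:52+00:00
```
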

Scary Devil Monastery (profile) says:

Re: Re: Re:3 Re:

"I’ve never been able to find a good explanation for WHY anyone would create a system like this."

No one with any experience in programming ever would. Today. This format, though, was implemented way back when IBM still thought 64k was excessive memory for the glorified calculator and word processor they imagined was all the PC was good for.

PaulT (profile) says:

Re: Re: Re:4 Re:

"No one with any experience in programming ever would. Today."

Nobody’s making software with these bugs today. You’d have to go far out of your way to introduce them on today’s hardware.

"This format, though, was implemented way back when IBM was still thinking 64k was still excessive"!

The epoch bug has nothing to do with IBM and in fact occurred well before they released their first PC. It’s quite simple – the limitations of hardware at the time meant that compromises had to be made, with the understanding that 64 bit systems would be common long before 2038. This is true. The problem now is getting the beancounters to sign off on expensive fixes for legacy code to fix an issue that won’t happen for another 17 years. As with Y2K, the problem will naturally be fixed as people upgrade systems, then there will probably be a last minute round of scaremongering to inspire the remaining holdouts.

Anonymous Coward says:

Re: Re: Re:3 Re:

"Nah, we’ll pick an arbitrary date, and then calculate the time based on the number of seconds that have passed."

It’s actually the number of seconds that have passed, minus the number of completed positive leap seconds (and, presumably, plus the number of completed negative leap seconds). That means that timestamps sometimes decrease as time goes on, making it extra fun.

Bobvious says:

Re: Y2K

"To: All personnel
From: Computer Systems Department
Subject: Y2K

Our staff has completed 18 months of work on-time and on-budget. We have gone through every line of code in every program in every system. We have analyzed all databases, all data files, including backups and historic archives, and modified all data to reflect this change. We are proud to report that we have completed the Y-to-K date change mission and have now implemented all changes to all programs and all data to reflect the new date standards as follows:

Januark, Februark, March, April, Mak, June, Julk, August, September, October, November, December

As well as:

Sundak, Mondak, Tuesdak, Wednesdak, Thursdak, Fridak and Saturdak.

I trust that this is satisfactory, because to be honest none of this Y-to-K date change has made any sense to me. But I understand it is a global problem, and our team is glad to help as usual in any way possible."
https://www2.lbl.gov/Science-Articles/Archive/y2k-problem-solved.html

Upstream (profile) says:

Could this be yet another object lesson that anything that relies on a central server system or proprietary code is not really owned, but just on loan from the tech overlords?

Sometimes these issues can be overcome, but often it seems it is just not worth the effort. Even if these obstacles are overcome, there remains the problem of possible legal action on the part of the tech overlords for some real or perceived copyright or patent violation.

Anonymous Coward says:

It’s even worse than that, as games on disc will not work either.
Imagine playing a game in 5 years – it won’t work because it can’t check the server trophy status.
For a single-player game with no online multiplayer, WTF. E.g. God of War.
I can play games on my 360 without using an internet connection, e.g. single-player games.

Upstream (profile) says:

Re: Re:

Please forgive my lack of knowledge of the details of the subject. Apparently I was born without the computer game gene. I am completely unfamiliar with these devices and the games that run on them. What little I do know about them comes from articles like this one, which usually explain how some detail* is causing or might cause the whole system to fail, effectively bricking the devices and leaving the "owners" with no recourse.

*Proprietary code that can be bricked with a forced update, or a server that can be taken off-line, or similar.

This comment has been deemed insightful by the community.
Paul B says:

Re: Re: Re:

The system needs to call home sometimes – for example, when the battery gets replaced. Failure to do so means the system becomes a brick.

Home is going offline, and home can’t be replaced by anyone but Sony. So when all the batteries die, all old systems stop working for good, and lawsuits might happen.

Anonymous Coward says:

Re: Re:

The problem exists, and it’s yet another reminder that you don’t own what you buy, but it’ll be years before it starts to bite anybody.

If I were to tell you that you will have cancer in 10 years, would you have the same laissez-faire reaction?

The amount of time before an issue becomes a problem doesn’t justify the problem’s existence, especially when that amount of time was vast – i.e. the problem was known about beforehand and nothing was done about it.

People have reason to be upset. Handwaving it off as "yet another…" just breeds contempt and allows those causing the problems to get away with doing so in the future.

Anonymous Coward says:

Re: Re: Re:

The problem was known about beforehand and nothing was done about it.

People have reason to be upset.

Sure, but they are the ones who didn’t do anything about it. They eagerly paid for locked-down systems, bought games that had user-hostile anti-copying technology and required network connections, etc. For fuck’s sake, this is Sony—they were known for lock-in and overpricing in the ’80s.

Simply being upset won’t accomplish anything. People need to stop rewarding this behavior. At an absolute minimum, nobody should be able to sell copies of a game without a legally-binding promise that the game will be open-sourced within 10 years (i.e., nobody should be willing to play games without such a guarantee).

If I were to tell you, you will have cancer in 10 years. Would you have the same laissez faire reaction?

I don’t understand what you’re trying to say. Isn’t 10 years of remaining life expectancy pretty good for a person with incurable cancer?

This comment has been deemed insightful by the community.
PaulT (profile) says:

Re: Citation Needed

"Without a reference to the information about the CMOS, this is conjecture."

Uncharacteristically, it seems that the article has neglected to link to the source being quoted. However, it’s easily found by searching for the text, which appears to be from this source: https://arstechnica.com/gaming/2021/04/the-looming-software-kill-switch-lurking-in-aging-playstation-hardware/

"The original TechDirt post said players will still be able to re-download already purchased games, just not buy new ones."

For now. The issue is that little concern is shown for legacy hardware. At some point it’s likely that the servers involved in those checks will be shut down. Once that happens, only people who have already downloaded games will be able to play them. This is already known. But now we have the extra wrinkle that even people who made sure they downloaded their games before the servers were shut down will eventually lose access to them when the CMOS battery dies. We’ve gone from "we’re no longer selling the games but you can keep using them" to "one day, guaranteed, you will not be able to access what you paid for".

Thad (profile) says:

Re: Citation Needed

The headline, as I said above, incorrectly implies that the problem is imminent. It isn’t, but that doesn’t mean it won’t happen eventually. You don’t think those servers will stay up forever, do you?

I’ve got games from 30 years ago. I plug the game into the console, I turn the console on, the game works. Do you think I’ll be able to say the same for my PS4 collection in 30 years?

I think that’s a timescale we should be aware of. Not whether it’s "about to" become a problem — it isn’t — but whether it will eventually — it will.

Phoenix84 (profile) says:

Re: Re: Citation Needed

I still have all my SNES games, but sadly, my SNES died. Apparently one of the chips in there has failed.
It wasn’t intentional on Nintendo’s part (AFAIK), but I can no longer play those games.
Even if the CMOS battery weren’t a problem, eventually the hardware would otherwise succumb to various faults and die.
That’s the problem with consoles in general, and until they all go away this will always be a problem, just with different timing.

PaulT (profile) says:

Re: Re: Re: Citation Needed

"It wasn’t intentional on Nintendo’s part (afaik), but I can no longer play those games."

Erm, no. You can absolutely still play those games, you just have to buy another SNES. Or, there is other hardware available to allow you to use the cartridges on other devices (unofficial, of course, but that doesn’t matter since no SNES game needs to register your details or phone home to get permission).

That’s a totally different scenario to the PS3 situation described where you’re already unable to transfer the purchase to another console, so you won’t have any access to the game once the hardware dies.

ECA (profile) says:

when the internet

The (kinda) stupid thing I noticed about security was using the net to verify the console’s programs, the machine, the OS, the updates.
Anyone have a tablet with Android on it, a few games or programs, and NOT let it on the net for any period of time?
It don’t like it much.

Now, having a console that really likes to be connected to the net, that wants to be connected for its own protection, is a bit mean.
Isn’t this a big reason to have internet for everyone? Wouldn’t it be nice if Sony would help pay for that (LMAO)?

This comment has been deemed insightful by the community.
Anonymous Coward says:

Sony has an opportunity to fix the issue they caused here – which is an issue if you don’t connect to the internet, never mind whether their time-check server is shuttered. Of course, they can also pack the update with whatever other crap they like, and it would likely require updating a PS(n) to the latest firmware/software otherwise – which some people don’t want.

Maybe third-party jailbreaking and a firmware fix would be the way to go. Not sure – the CMOS died in my PS3 ages ago, I never connected it to the net except when I really needed an update to play/patch a game, and I only ever played discs and had no account. But that’s me. Other people have tons of downloaded games.

Anonymous Coward says:

A little more backwards compatibility and there wouldn’t be a need to keep ancient hardware at all. The PS5 supporting PS4 games is a good start, but there’s no need to stop there. The PS5 is easily powerful enough to emulate PS3 games even if hardware compatibility isn’t possible.

But again, it’s third-party emulators on PC that pick up the slack there, too.
