GCHQ Propose A 'Going Dark' Workaround That Creates The Same User Trust Problem Encryption Backdoors Do

from the wiretaps-but-for-Whatsapp dept

Are we “going dark?” The FBI certainly seems to believe so, although its estimation of the size of the problem was based on extremely inflated numbers. Other government agencies haven’t expressed nearly as much concern, even as default encryption has spread to cover devices and communications platforms.

There are solutions out there, if it is as much of a problem as certain people believe. (It really isn’t… at least not yet.) But most of these solutions ignore workarounds like accessing cloud storage or consensual searches in favor of demanding across-the-board weakening/breaking of encryption.

A few more suggestions have surfaced over at Lawfare. The caveat is that both authors, Ian Levy and Crispin Robinson, work for GCHQ. So that should give you some idea of which stakeholders are being represented in this addition to the encryption debate.

The idea (there’s really only one presented here) isn’t as horrible as others suggested by law enforcement and intelligence officials. But that doesn’t mean it’s a good one. And there’s simply no way to plunge into this without addressing an assertion made without supporting evidence towards the beginning of this Lawfare piece.

Any functioning democracy will ensure that its law enforcement and intelligence methods are overseen independently, and that the public can be assured that any intrusions into people’s lives are necessary and proportionate.

By that definition, the authors’ home country is excluded from the list of “functioning democracies.” Multiple rulings have found GCHQ’s surveillance efforts in violation of UK law. And a number of leaks over the past half-decade have shown its oversight is mostly ornamental.

The same can be said for the “functioning democracy” on this side of the pond. Leaked documents and court orders have shown the NSA frequently ignores its oversight when not actively hiding information from Congress, the Inspector General, and the FISA court. Oversight of our nation’s law enforcement agencies is a patchwork of dysfunction, starting with friendly magistrates who care little about warrant affidavit contents and ending with various police oversight groups that are either filled with cops or cut out of the process by the agencies they nominally oversee. We can’t even get a grip on routine misconduct, much less ensure “necessary and proportionate intrusions into people’s lives.”

According to the two GCHQ reps, there’s a simple solution to eavesdropping on encrypted communications. All tech companies have to do is keep targets from knowing their communications are no longer secure.

In a world of encrypted services, a potential solution could be to go back a few decades. It’s relatively easy for a service provider to silently add a law enforcement participant to a group chat or call. The service provider usually controls the identity system and so really decides who’s who and which devices are involved – they’re usually involved in introducing the parties to a chat or call. You end up with everything still being end-to-end encrypted, but there’s an extra ‘end’ on this particular communication. This sort of solution seems to be no more intrusive than the virtual crocodile clips that our democratically elected representatives and judiciary authorise today in traditional voice intercept solutions and certainly doesn’t give any government power they shouldn’t have.

We’re not talking about weakening encryption or defeating the end-to-end nature of the service. In a solution like this, we’re normally talking about suppressing a notification on a target’s device, and only on the device of the target and possibly those they communicate with. That’s a very different proposition to discuss and you don’t even have to touch the encryption.

Suppressing notifications might be less harmful than key escrow or backdoors. It wouldn’t require a restructuring of the underlying platform or its encryption. If everything is in place — warrants, probable cause, exhaustion of less intrusive methods — it could give law enforcement a chance to play man-in-the-middle with targeted communications.

But there’s a downside — one that isn’t referenced in the Lawfare post. If both ends of a conversation are targeted, this may be workable. But what if one of the participants isn’t a target? This leaves them unprotected, because the suppressed warnings would never tell non-target parties that the conversation is no longer secure. Obviously, it wouldn’t do to let anyone the target converses with know that things are no longer normal on the target’s end, as it’s likely one of those participants would let the target know they’d encountered a security warning while talking to them.

In that respect, it is analogous to a wiretap on someone’s phones. It will capture innocent conversations irrelevant to the investigation. In those cases, investigators are told to stop eavesdropping. It’s unclear how the same practice will work when the communications are being harvested digitally via unseen government additions to private conversations.

This proposal seems at odds with the authors’ suggested limitations, especially this one:

Any exceptional access solution should not fundamentally change the trust relationship between a service provider and its users.

When a service provider starts suppressing warning messages, the trust relationship is going to be fundamentally altered. Even if users are made aware this is only happening in rare instances involving targets of investigations, the fact that their platform provider has chosen to mute these messages means they really can’t trust a lack of warnings to mean everything is still secure.

On the whole, it’s a more restrained solution than others have proposed — but it still has the built-in exploitation avenue key escrow does. It’s better than a backdoor but not by much. And the authors of this proposal shouldn’t pretend the solution lives up to the expectations they set for it. Their own proposal falls short of their listed ideals… and the whole thing is delivered under the false pretense law enforcement/intelligence agencies are subject to robust oversight.



Comments on “GCHQ Propose A 'Going Dark' Workaround That Creates The Same User Trust Problem Encryption Backdoors Do”

That One Guy (profile) says:

'We promise we totally won't abuse THESE tools.'

Any functioning democracy will ensure that its law enforcement and intelligence methods are overseen independently, and that the public can be assured that any intrusions into people’s lives are necessary and proportionate.

I’m not sure if they’re trying to open with a joke here, or making it crystal clear that they’re talking about a purely hypothetical situation that has absolutely nothing to do with the ones who pay their salaries. Either way, nice of them to start out with a statement making clear that what follows will have only a vague connection to the real world, I suppose.

Any exceptional access solution should not fundamentally change the trust relationship between a service provider and its users.

If they really think that ‘your communications can be monitored as soon as someone brings the right paperwork, whether because you are of interest or simply communicating with someone who is, and you’ll have no idea it’s happening‘ wouldn’t impact the ‘trust relationship between a service provider and its users’, they’re not just naive, they’re downright delusional.

Ultimately, however, they face a problem of their own making here: with the multitude of abuses and the history of secrecy tainting the agency (and the US equivalent, as noted in the article), the fact that this ‘suggestion’ may not be as bad as crippling encryption does not mean I want them to be able to use it, because I have zero trust that they won’t abuse it, just like they have abused previous tools.

If, as has been demonstrated to be the case, they can’t be trusted with what they already have, why should they be granted more?

btr1701 (profile) says:

Re: 'We promise we totally won't abuse THESE tools.'

Not sure I see what the big deal is here. As the article notes, this is really no different than a traditional wiretap, which we’ve been doing for decades, and which has been well-established as consistent with the 4th Amendment when supported by warrants and probable cause. And it doesn’t break the encryption, so there’s no worry that some bad guy might get hold of a leaked key or something.

Anyone who objects to this is basically just objecting to law enforcement in general at this point, and that’s not a rational position to take.

PaulT (profile) says:

Re: Re: 'We promise we totally won't abuse THESE tools.'

“As the article notes, this is really no different than a traditional wiretap”

In principle, yes. In practical terms, not really.

“And it doesn’t break the encryption”

You’re missing the point – that’s exactly what it will do.

It won’t necessarily break the kinds of encryption where two end users communicate with keys held by the provider. It will, however, completely break the types of encryption that are more complex or which have been specifically designed so that the provider themselves do not have access mid communication. It will prevent those types of encryption being developed (well, except by the “bad guys”, of course), and will likely hinder patching of issues found with existing encryption if the best fixes require the removal of this ability.

They are literally asking that a man-in-the-middle attack – one thing that encryption is specifically designed to prevent – be allowed, and part of the reasoning is that they can do it with technology that predates modern encryption. To reuse an analogy I’ve used elsewhere, that’s like asking that biometric locks still be openable with a metal master key because that’s how other locks work, and then complaining when people explain why this is not a good idea.
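To make that concrete, here’s a toy unauthenticated Diffie-Hellman exchange (toy parameters and invented names, not any real service’s scheme) showing how whoever controls the identity layer becomes the “extra end”:

```python
# Toy demonstration of a man-in-the-middle against unauthenticated key
# exchange. Parameters are illustrative only -- far too small for real use.
import secrets

p = 2**127 - 1   # a Mersenne prime; illustration only
g = 3

def keypair():
    priv = secrets.randbelow(p - 2) + 1
    return priv, pow(g, priv, p)

a_priv, a_pub = keypair()   # Alice
b_priv, b_pub = keypair()   # Bob
m_priv, m_pub = keypair()   # Mallory, who controls the identity layer

# Mallory hands Alice and Bob her own public key instead of each other's.
alice_shared = pow(m_pub, a_priv, p)   # Alice thinks this is shared with Bob
bob_shared = pow(m_pub, b_priv, p)     # Bob thinks this is shared with Alice

# Mallory can derive both secrets, so she can decrypt, read, and
# re-encrypt everything in transit -- the "extra end".
assert alice_shared == pow(a_pub, m_priv, p)
assert bob_shared == pow(b_pub, m_priv, p)
```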

They’re still asking for back doors and for encryption to be made insecure in order that they have a slightly easier job of spying on people, they’ve just asked a little more nicely this time.

“Anyone who objects to this is basically just objecting to law enforcement in general at this point”

You mean “understands encryption and the ways it will be compromised by legal restrictions such as the ones suggested”.

Mike Masnick (profile) says:

Re: Re: 'We promise we totally won't abuse THESE tools.'

Not sure I see what the big deal is here. As the article notes, this is really no different than a traditional wiretap,

Uh, it’s very different in that it adds a direct vulnerability into the encryption and assumes that won’t get exploited.

And it doesn’t break the encryption

Yes, that’s literally what it does. It is adding a way to get into an encrypted conversation, which is literally breaking encryption.

Uriel-238 (profile) says:

Re: Re: Traditional wiretaps

We may be used to traditional wiretaps at this point, but they were a pretty major deal when law enforcement wanted to start listening in on private conversations (or worse, on business conversations).

We may be resigned to the feds wiretapping our phones, but that should not be confused with the idea that we consented to them wiretapping our phones.

Martin Luther King Jr. and Philip K. Dick certainly didn’t consent.

Anonymous Coward says:

Re: Re:

You mean like career military spy General Michael Flynn, who worked his way up to be Assistant Director of National Intelligence?

By far the worst part of these de facto wiretaps is that they easily set people up for “perjury traps” whenever people fail to remember everything in a recorded conversation. Wiretaps should only ever be used for solving specific serious crimes, not as a general-purpose way to convict people who have committed no crime other than having an imperfect memory (whether by accident or design).

Stephen T. Stone (profile) says:

Re: Re: Re:

the worst part of these de facto wiretaps is that they easily set people up for "perjury traps" whenever people fail to remember everything in a recorded conversation

A difference exists between forgetting a few details here and there and outright lying about what was (or was not) said. Having a fuzzy memory is not the same thing as committing perjury.

Anonymous Coward says:

Re: Re: Re: Re:

A skilled prosecutor armed with recorded conversations can trap almost anyone into committing perjury. A person’s pride can be used as a tool against him, and even a person’s own fear of committing perjury can be used against him, by skillfully crafting and arranging questions that gradually lead a person by the nose down a preset path, one that eventually forces him to either admit he misspoke earlier (basically admitting perjury) or continue down that path, eventually outright lying in order to cover up earlier innocent gaffes or agreeable conversational habits.

It’s easy to see that anyone with any idea of how the legal system really works either refuses to talk or feigns amnesia from start to finish. People who think they have nothing (or little) to hide and choose to cooperate with authorities (sometimes unknowingly) can be setting themselves up for some real pain down the road.

Thad (user link) says:

Re: Re: Re: Re:

Having a fuzzy memory is not the same thing as committing perjury.

It can be.

You won’t see me showing any sympathy to Flynn, but there are serious problems with criminalizing lying to the FBI. Ken White has discussed them on multiple occasions; here are a few examples:

Everybody Lies: FBI Edition, Popehat

Trump Investigation shows how easy it is for feds to create crimes, National Review

Donald Trump shouldn’t talk to the feds. And neither should you.

Trump’s problem isn’t that he has bad lawyers. It’s that he’s a bad client., Washington Post

And not directly related but along the same lines: Witnesses ‘flipping’ does corrupt justice. But not because they’re ‘rats.’, New York Times

As always, we have to consider that the methods the criminal justice system uses to prosecute the guilty can also be used to persecute the innocent. The anon is right that the FBI can trick nervous people into perjuring themselves. That’s a serious problem, even though you’re right that that’s not what happened with Flynn.

James Burkhardt (profile) says:

Re: Re: Re:2 Re:

Except most, if not all, of those articles describe issues of ‘lying to a federal official’, i.e. obstruction, rather than perjury, a crime committed by lying under oath. While it is possible to trick a person into lying on the stand, it is my understanding that perjury requires that the lie be willful, that is, made with the intent to deceive, an element not required in obstruction cases. The concern in these cases is obstruction, not perjury.

Thad (profile) says:

Re: Re: Re:3 Re:

That’s a fair point, but the phrase “perjury trap” in its common recent usage is specifically a reference to the Mueller investigation and the probability that, were Trump to submit to direct questioning, he would lie, and then that would be a crime. Whether or not that crime is technically perjury isn’t really the point.

Anonymous Coward says:

It’s better than a backdoor but not by much.

No it’s not, because it requires that the service provider, and not the user, controls the keys. Also, it implies that the encryption application should not list the keys it is using.

If somebody else is controlling your encryption system, it is totally broken, and not fit for purpose.

Anonymous Coward says:

Re: Re:

No it’s not, because it requires that the service provider, and not the user, controls the keys. Also, it implies that the encryption application should not list the keys it is using.

In other words, two backdoors. Good end-to-end encryption will let users do some kind of out-of-band key verification, and will warn when keys change.
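A minimal sketch of both safeguards, assuming the raw public-key bytes are already in hand (the helper names are hypothetical; Signal exposes the same idea as “safety numbers”):

```python
# Sketch of out-of-band key verification plus key-change warnings.
# Helper names are hypothetical; this is the idea, not any app's API.
import hashlib

def fingerprint(my_pub: bytes, their_pub: bytes) -> str:
    """Short human-comparable code derived from both public keys."""
    # Sort so both parties compute the identical string, then compare it
    # over a channel the provider doesn't control (in person, by phone).
    digest = hashlib.sha256(b"".join(sorted([my_pub, their_pub]))).hexdigest()
    return " ".join(digest[i:i + 4] for i in range(0, 24, 4))

known_keys: dict[str, bytes] = {}   # last key seen per contact (trust on first use)

def check_key(contact: str, pub: bytes) -> None:
    # This is exactly the warning the GCHQ proposal would have suppressed.
    old = known_keys.get(contact)
    if old is not None and old != pub:
        print(f"WARNING: {contact}'s key changed -- re-verify out of band!")
    known_keys[contact] = pub
```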

Anonymous Coward says:

Unicorns

So, the GCHQ has re-invented the unicorn, once again. What they are proposing is known as man-in-the-middle interception. But the only way that is useful is if you also have the keys to decrypt the conversation. And by definition, if those keys are possessed by anyone other than the intended parties, then the encryption is broken.

GCHQ knows this full well. So, it sounds like the GCHQ is proposing to set up the whole system, minus the keys, at first. Then they can come back later and say “You know, this just isn’t really working out the way we need it to. We just need to make one little, itsy-bitty change to fix it. All we need are the keys”.

One step at a time to get what they want.

Anonymous Coward says:

Re: Unicorns

But the only way that is useful is if you also have the keys to also decrypt the conversation.

No, it actually doesn’t require this, as this exploit occurs before the encryption keys are exchanged.

The first step in end-to-end encryption is the key exchange. The problem, of course, is that prior to the key exchange there is no way to tell who is at the other end of the communication, i.e. no way of knowing whose key you are receiving. So the question is: how can we verify the identity of the owner of the key we have received?

In the case of internet services, this step is performed by certificate authorities which (for example) verify that the public key you have received when you go to Facebook.com actually belongs to Facebook. This is, of course, only as trustworthy as the certificate authority in question.
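That CA step is observable with nothing but the Python standard library (the host is only an example, and this needs network access):

```python
# Inspect the CA-vouched identity for a TLS server. The handshake below
# fails unless a CA in the system trust store has bound this server's
# public key to the facebook.com name.
import socket
import ssl

ctx = ssl.create_default_context()   # trusts the system CA bundle

with socket.create_connection(("www.facebook.com", 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname="www.facebook.com") as tls:
        cert = tls.getpeercert()
        print(cert["subject"])   # the identity the CA bound to the key
        print(cert["issuer"])    # the CA whose word we are taking
```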

However, those big certificate authorities very rarely hold certificates for individuals (as a far greater likelihood of error for individuals would erode trust in their core business), nor do most service providers allow this anyway. Instead, when someone wishes to speak to you, the service provider establishes the initial connection for key exchange, thus acting as the certificate authority and verifying the identity of each party themselves. And similar to above, the identity of each party is only as trustworthy as the service provider.

Now this proposal simply states that the service provider (who is already providing identity verification on their encrypted service) should lie about this identity verification when requested by law enforcement. You then, based on this lie, exchange public keys with the police. The police still cannot decrypt messages sent with your public key or your recipient’s public key, but it no longer matters because you have been tricked into sending the messages to them using their own public key. This is, theoretically, not any kind of new weakness in the system, as this is already perfectly possible to do.

This should be of limited effectiveness, as key exchange should only ever happen once, thus if you communicated with people before the police became interested, all future communications will remain secure (only communications with new people would be vulnerable). However, because most services allow you to add new people to existing conversations (thus requiring a new key exchange with the new person), it might be possible for the service provider to trick your system into adding a new person to the group (exchange public keys with them) without actually notifying you that this has occurred. Since the system is designed to keep all members up to date, your own system would then use your private key to decrypt messages, encrypt them with the police’s public key, and send them to the police while the service provider prevents this activity from showing up on your system.
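A toy model of that silent group addition (invented names, placeholder “encryption”, purely to show where the trust fails):

```python
# Toy model of the ghost-participant idea: clients encrypt one copy of
# each message per member the identity server lists, so a server that
# silently appends an extra key exposes everything. No real crypto here.

class IdentityServer:
    def __init__(self):
        self.groups: dict[str, list[str]] = {}
        self.ghosts: dict[str, list[str]] = {}   # silent additions

    def members(self, group: str) -> list[str]:
        # Clients have no way to tell real members from ghosts.
        return self.groups[group] + self.ghosts.get(group, [])

class Client:
    def __init__(self, name: str):
        self.name = name
        self.inbox: list[str] = []

def send(server: IdentityServer, directory: dict, group: str, plaintext: str):
    # One "encrypted" copy per listed member -- end-to-end in form,
    # but the member list itself is the point of trust.
    for member in server.members(group):
        directory[member].inbox.append(f"enc[{member}]({plaintext})")

server = IdentityServer()
directory = {n: Client(n) for n in ("alice", "bob", "ghost")}
server.groups["chat"] = ["alice", "bob"]
server.ghosts["chat"] = ["ghost"]   # added without any notification

send(server, directory, "chat", "see you at noon")
print(directory["ghost"].inbox)   # ['enc[ghost](see you at noon)']
```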

It’s actually quite an elegant solution in theory (as identity verification is not a new vulnerability in the system), though it still has far too many problems to be workable in practice.

Anonymous Coward says:

Re: Re: Unicorns

This should be of limited effectiveness, as key exchange should only ever happen once, thus if you communicated with people before the police became interested, all future communications will remain secure (only communications with new people would be vulnerable).

You have overlooked the big hole they are demanding: changing the keys used without notifying the user. That means if they want in on a conversation, their key is added, or the user keys are changed, without the users being notified; which implies that any way of viewing the keys in use is removed from encryption-enabled applications.

Anonymous Coward says:

Re: Re: Re: Unicorns

You have overlooked the big hole they are demanding, changing the keys used without notifying the user.

"it might be possible for the service provider to trick your system into adding a new person to the group (exchange public keys with them) without actually notifying you that this has occurred."

Nope, included right there.

Anonymous Coward says:

Re: Re: Unicorns

The police still cannot decrypt messages sent with your public key or your recipients public key, but it no longer matters because you have been tricked into sending the messages to them using their own public key.

At which point they then "have the keys to also decrypt the conversation", which you started off denying.

PaulT (profile) says:

“You end up with everything still being end-to-end encrypted, but there’s an extra ‘end’ on this particular communication”

…which means that it’s no longer “end to end” but either a multicast communication or by definition compromised with a man in the middle exploit – which is the kind of thing that encryption is designed to stop in the first place!

“That’s a very different proposition to discuss and you don’t even have to touch the encryption.”

Except, of course, encryption has increasingly been designed so that the service provider does not hold the keys and cannot provide them mid-communication. For those services, you would either have to completely redesign the encryption, or build in a client-side backdoor, which makes them less secure than even an encryption backdoor would.

It’s funny. Whenever these people explain how things are easily possible, they usually end up describing things that cannot be implemented.

Matthew Cline (profile) says:

Re: Re:

Chat with end-to-end encryption already relies on something like a certificate authority, so that if Alice wants to chat to Bob she can get his public key. Tampering with the certificate authority server would allow for Eve to masquerade as Bob. The proposal is to allow law enforcement to do such tampering, plus something like the following:

If the chat app has a configuration option letting Alice say she trusts Bob so much that he can automatically join in on any existing group chat, the app would be changed so that Bob could silently join that existing chat, without notifying anyone already in the chat that someone new has joined. The app would also be changed so that Bob could join the chat multiple times without alerting any user that there appear to be multiple simultaneous instances of Bob.

PaulT (profile) says:

Re: Re: Re:

“The proposal is to allow law enforcement to do such tampering”

Exactly. Which means that they have to either insert a back door and/or force the service provider to have access that they have deliberately designed the system not to give them.

The group chat thing seems to be a waste of time to my mind too. I might be wrong, but I’d suspect any competent criminals are avoiding group chats anyway to avoid detection. They would certainly be avoiding such things once it’s revealed that providers are being forced to redesign apps to do such things (and it absolutely would be revealed).

I might be giving the potential criminals too much credit, but this seems to be yet another example of something that would be much more useful for abuse than it would be in actually catching the “bad guys”.

Anonymous Coward says:

Re: Re: Re: Re:

Which means that they have to either insert a back door and/or force the service provider to have access that they have deliberately designed the system not to give them.

In most cases, they already have this access. Most chat providers act as the certificate authority within their ecosystem (partly due to convenience, partly because no large certificate authorities are going to risk their core business by attempting to verify the identity of hundreds of millions of individuals). Since they are already the certificate authority, they already control the databases containing information on identification of individuals for key exchange, and can already change that database at will. There’s no change in the system required, only in the perceived trustworthiness of the provider.

The group chat things seems to be a waste of time to my mind too. I might be wrong, but I’d suspect any competent criminals are avoiding group chats anyway to avoid detection.

For most chat providers, there is no functional difference between group chats and individual chats, individual chats are just group chats that currently contain only two members.

As for how this is advantageous, in theory key exchange would only need to occur once (at the beginning of the chat), which means that a certificate authority exploit (as discussed) would only be effective on chats which began after the police became interested. Any chats already going on would no longer need the certificate authority, and therefore remain secure. However, most chats not only allow people to be added to existing chats, but also update them on the chat history after they’re added. That is, when you add someone to a chat, the encrypted historical contents are decoded using your private key and then encrypted using their public key and sent to them. Thus, this gives the ability to obtain information on existing chats, as opposed to only getting information on new chats.
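A sketch of that history step under the same sort of toy model (placeholder “encryption”, hypothetical names): if the “new member” is a silently added ghost, the victim’s own device does the decryption work for the eavesdropper.

```python
# Toy sketch of history backfill on a member add: the existing client
# decrypts the backlog with its own key and re-encrypts it to the
# newcomer's key. No real cryptography -- illustration only.

def toy_encrypt(msg: str, key: str) -> str:
    return f"enc[{key}]:{msg}"        # stand-in for public-key encryption

def toy_decrypt(ct: str, key: str) -> str:
    prefix = f"enc[{key}]:"
    assert ct.startswith(prefix), "wrong key"
    return ct[len(prefix):]

def share_history(backlog: list[str], my_key: str, newcomer_key: str) -> list[str]:
    # The client can't tell a legitimate newcomer from a ghost; it just
    # performs the re-encryption the protocol asks of it.
    return [toy_encrypt(toy_decrypt(ct, my_key), newcomer_key) for ct in backlog]

backlog = [toy_encrypt(m, "alice") for m in ("hi", "meet at noon")]
print(share_history(backlog, "alice", "ghost"))
# -> ['enc[ghost]:hi', 'enc[ghost]:meet at noon']
```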

This is where a new backdoor would need to be inserted in most systems, as addition of new members "shouldn’t" be possible from the provider side.

But the basic idea of undermining the certificate authority would require no changes to the existing system; identity verification is still one of the biggest problems in encryption in general. Certificate authorities can compromise communications by changing the identity attached to a public key because changing the identity attached to a public key is quite literally their job. It’s just that their job is generally to change it correctly on behalf of their users, rather than incorrectly on behalf of law enforcement.

PaulT (profile) says:

Re: Re: Re:2 Re:

“In most cases, they already have this access”

Yes, and this forces services that deliberately don’t do this to follow suit, reversing recent trends and placing their users at greater risk than with their current design.

You seem to be missing the point – whether or not what you describe is the norm, what GCHQ are proposing is to make it illegal to operate in any other way.

“There’s no change in the system required, only in the perceived trustworthiness of the provider.”

With SOME providers, yes. With others, it involves a complete redesign to deliberately make their service less secure. Are you following that yet?

“For most chat providers”

Again, you seem to be stating the correct facts, but missing the overall point. You keep qualifying your words with “most”, meaning that you’re aware that there are providers who do not fit into the description and will be destroyed / irrevocably changed by these rulings if they came to pass.

Even if they’re a minority, the fact that you admit there are some providers who do not operate in the way GCHQ describes confirms the point that what they are saying is a lie. You can correctly say that “most” physical doors require a metal key. But, if you demand that police be supplied with a master metal key for every lock, you’re going to cause some major problems for people who manufacture and use biometric and combination locks, even if statistically they’re in the minority.

Uriel-238 (profile) says:

Re: Re: Re: "Giving the potential criminals too much credit"

I find myself offended by the basic mistakes that willful criminals (that is, those who plan and intend to commit a crime) make, the lack of track-covering on their part.

It may also be that there is a lot of crime and the police mostly pursue low-hanging fruit, that is, crime that is easy to catch (and crime committed by less aggressive suspects), when they’re not forced to do otherwise by the high publicity of an incident.

That said, I’ve pointed out already that they don’t use their current tools right (such as their $2 field drug test kits that test positive for glazed sugar and cotton candy), and our law enforcement needs to learn how to do its job professionally and competently before we yield even more of our privacy to them.

That One Guy (profile) says:

Re: Re: Re:2 "Giving the potential criminals too much credit"

That said, I’ve pointed out already that they don’t use their current tools right (such as their $2 field drug test kits that test positive for glazed sugar and cotton candy), and our law enforcement needs to learn how to do its job professionally and competently before we yield even more of our privacy to them.

I’d say that that would be the starting point to giving them more tools, a demonstration that they can act responsibly with what they already have, but even then I’d still want a very strong showing that what they are asking for was not only necessary but the least intrusive option reasonably available, along with an extensive weighing of how it can be abused and what steps would be taken to mitigate/eliminate/punish abuse.

Do all that and then I’d consider allowing them more tools, but given they can’t even get past the first step I’m not holding my breath.

PaulT (profile) says:

Re: Re: Re:2 "Giving the potential criminals too much credit"

“It may also be that there is a lot of crime and the police mostly pursue low-hanging fruit, that is crime that is easy to catch”

That’s absolutely my take. Properly organised criminals often take a long time to get caught, specifically because they avoid making basic mistakes. Sometimes they just get caught because they make some basic mistakes in moments of weakness, but the real bad guys often don’t get caught for decades.

That’s why it’s doubly offensive that they wish to weaken or destroy protections for everybody for a chance to catch criminals today. They might catch a few morons who decide to use known compromised tools, but in reality all they’ll do is give more access to the guys they never catch, at the price of everybody’s rights.

That One Guy (profile) says:

Re: Re: Re: Re:

I might be giving the potential criminals too much credit, but this seems to be yet another example of something that would be much more useful for abuse than it would be in actually catching the "bad guys".

I’ve long been of the opinion that the only criminals stupid enough to be caught by things like this are likely too stupid to be of any real threat, making the gains/cost equation grossly disproportionate.

In exchange for snagging the low hanging fruit of the exceptionally stupid everyone is made less safe and secure, and yet agencies like the GCHQ see that as a good trade, because hey, it’s not like their security is being compromised.

Anonymous Anonymous Coward (profile) says:

Democracy

"Any functioning democracy will ensure that its law enforcement and intelligence methods are overseen independently, and that the public can be assured that any intrusions into people’s lives are necessary and proportionate."

To them the idea of democracy is ‘not only are we in charge, but we are benevolent (muhahahaha), to our own purposes, which is power, power, power’. To us, the idea of democracy is ‘the government is us’ (no matter how it is implemented, direct or representative) and we get to say what is ‘necessary and proportionate’. Their cognitive dissonance is outrageous and purposeful.

Anon says:

If we’re going back?

Why not go back even more years, to the days when conversations could not be monitored – because there were no phones or hidden microphones. Because there were no recordings, everything was hearsay. And yet, democracy worked then too – probably better than now… without such easy access to what people were saying, with whom, where they were precisely every waking second, what they spent their money on, or what entertainment they watched.

Anonymous Coward says:

any functioning democracy

Where can I find one of those? It’s like Bigfoot or Nessie, except like 10,000x more popular and less stigmatized, right? So many people claim it’s totes for real, and they’ve seen it and know how to get there, but I mean holy crap is there a lot of faked BS, creeps in monkey suits, dinosaurs, and potato footage… the fan clubs are utterly looney, and everybody’s got something to sell, idea or swag. So often it seems there’s a pseudo-functioning government that’s clearly not democratic, or a democratic-ish one that obviously can’t function – surely it’s not mutually exclusive – I want to believe~!

Funny thing, hyperbole aside: with the right attitude, humbleness and humor, apathy and hope aren’t mutually exclusive either. *Through most of human history apathy has been rightly seen as a sign/symptom of mental illness; however, due to recent events, psychologists have been forced to re-evaluate this simplistic view and acknowledge the clear emerging consensus that the inverse is now undeniably true: at this juncture in history, a chronic and acute absence of apathy is clearly indicative of severe mental illness and probable social retardation. A chronic lack of apathy is increasingly becoming a severely debilitating condition, with the national financial consequences poised to skyrocket into the trillions in the next few decades. It is expected the DSM-VI shall officially declare the death of all meaning.

*facts contained in internet comments may in fact not be facts, but jokes that went over your head…

