Glyn Moody’s Techdirt Profile


About Glyn Moody (Techdirt Insider)

Posted on Techdirt - 2 March 2021 @ 8:18pm

Not OK, Zoomer: Here's Why You Hate Videoconference Meetings -- And What To Do About It

from the fighting-fatigue dept

With much of the world in various states of lockdown, the videoconference meeting has become a routine part of many people's day, and a hated one. A fascinating paper by Jeremy Bailenson, director of Stanford University's Virtual Human Interaction Lab, suggests that there are specific problems with videoconference meetings that have led to what has been called "Zoom fatigue", although the issues are not limited to that platform. Bailenson believes this is caused by "nonverbal overload", present in at least four different forms. The first involves eye gaze at a close distance:

On Zoom, behavior ordinarily reserved for close relationships -- such as long stretches of direct eye gaze and faces seen close up -- has suddenly become the way we interact with casual acquaintances, coworkers, and even strangers.

There are two aspects here. One is the size of the face on the screen, and the other is the amount of time a person is seeing a front-on view of another person's face with eye contact. Bailenson points out that in another setting where there is a similar problem -- an elevator -- people typically look down or avert their gaze in order to minimize eye contact with others. That's not so easy with videoconferencing, where looking away suggests lack of attention or loss of interest. Another problem with Zoom and other platforms is that people need to send extra nonverbal cues:

Users are forced to consciously monitor nonverbal behavior and to send cues to others that are intentionally generated. Examples include centering oneself in the camera's field of view, nodding in an exaggerated way for a few extra seconds to signal agreement, or looking directly into the camera (as opposed to the faces on the screen) to try and make direct eye contact when speaking.

According to Bailenson, research shows people speak 15% louder on videoconference calls compared to face-to-face interaction. Over a day, this extra effort mounts up. Also problematic is that it's hard to read people's head and eye movements -- important for in-person communication -- in a video call. Often they are looking at something that has popped up on their screen, or to the side, and it may be unclear whether the movement is a nonverbal signal about the conversation that is taking place. Another oddity of Zoom meetings is that participants generally see themselves for hours on end -- an unnatural and unnerving experience:

Imagine in the physical workplace, for the entirety of an 8-hr workday, an assistant followed you around with a handheld mirror, and for every single task you did and every conversation you had, they made sure you could see your own face in that mirror. This sounds ridiculous, but in essence this is what happens on Zoom calls. Even though one can change the settings to "hide self view," the default is that we see our own real-time camera feed, and we stare at ourselves throughout hours of meetings per day.

Finally, Bailenson notes that the design of cameras used for videoconferencing means that people tend to remain within a fairly tight physical space (the camera's "frustum"):

because many Zoom calls are done via computer, people tend to stay close enough to reach the keyboard, which typically means their faces are between a half-meter and a meter away from the camera (assuming the camera is embedded in the laptop or on top of the monitor). Even in situations where one is not tied to the keyboard, the cultural norms are to stay centered within the camera's view frustum and to keep one's face large enough for others to see. In essence users are stuck in a very small physical cone, and most of the time this equates to sitting down and staring straight ahead.

That's sub-optimal, because in face-to-face meetings, people move around: "they pace, stand up, stretch, doodle on a notepad, get up to use a chalkboard, even walk over to the water cooler to refill their glass", as Bailenson writes. That's important because studies show that movements help create good meetings. The narrow physical cone that most people inhabit during videoconferences is not just tiring, but reduces efficiency.

The good news is that once you analyze what the problems are with Zoom and other platforms, it's quite straightforward to tweak the software to deal with them:

For example, the default setting should be hiding the self-window instead of showing it, or at least hiding it automatically after a few seconds once users know they are framed properly. Likewise, there can simply be a limit to how large Zoom displays any given head; this problem is simple technologically given they have already figured out how to detect the outline of the head with the virtual background feature.
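The head-size cap Bailenson suggests is simple enough to sketch in code. The function below is purely illustrative: the name, the 160-pixel limit, and the assumption that the client already has a face bounding box (as Zoom's virtual-background segmentation implies) are mine, not anything from Zoom or the paper.

```python
# Illustrative sketch of a head-size cap for a videoconferencing client.
# Assumes the client can already detect a face bounding box; the function
# name and the 160 px default limit are hypothetical.

def capped_scale(face_height_px: int, max_face_px: int = 160) -> float:
    """Return the factor by which to scale a video tile so the
    rendered face is at most max_face_px tall."""
    if face_height_px <= max_face_px:
        return 1.0  # face already small enough; render at full size
    return max_face_px / face_height_px

# A face detected at 400 px would be rendered at 40% size (160 / 400).
```

A client applying this scale factor per participant would keep every face below the "close relationship" size Bailenson warns about, regardless of how near someone sits to their camera.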

Other problems can be solved by changing the hardware and office culture. For example, using an external webcam and external keyboard allows more flexibility and control over various seating arrangements. It might help to make audio-only Zoom meetings the default, or to use the old-fashioned telephone as an alternative to wall-to-wall videoconferencing. Exploring these changes is particularly important since it seems likely that working from home will remain an option or perhaps a requirement for many people, even after the current pandemic is brought under control. Now would be a good time to fight the fatigue it so often engenders.

Follow me @glynmoody on Twitter, Diaspora, or Mastodon.

13 Comments

Posted on Techdirt - 18 February 2021 @ 1:41pm

Indian Government Requires Educational Establishments To Obtain Its Approval For The Subject Matter And Participants Of International Online Conferences And Seminars

from the hardly-an-edifying-sight dept

It would be something of an understatement to say that the COVID-19 pandemic has had a big effect on our lives. One sector where people have had to re-invent themselves is the academic world. Core in-person activities like lectures, seminars, and conferences have been forced to move online. One advantage of a shift to virtual gatherings is that people can participate from around the world. However, for governments, that's less a feature than a bug, since it means they have less control over who is taking part, and what they are saying. In response to this development, the Ministry of Education in India has issued "Revised Guidelines for holding online/virtual Conferences, Seminars, Training, etc." (pdf). An opinion piece in The Indian Express calls it the "biggest attack in the history of independent India on the autonomy of our universities":

When it is fully enforced -- and let there be no doubts over the government's resolve to be iron-handed when it comes to restricting people's democratic rights -- India will find itself in the company of dictatorial regimes around the world that despise liberty of thought and muzzle freedom of expression in their institutions of higher learning.

The new guidelines apply to all publicly funded higher education establishments. The key requirement is for international online conferences and seminars to avoid politically sensitive topics, specifically any related to problems along India's borders. Chief among these are disputes between India and China over borders in the north-east of India, which has recently seen skirmishes between the Indian and Chinese armies, and in Kashmir. Another ban, vague in the extreme, concerns "any other issues which are clearly/purely related to India's internal matters". As well as obtaining approval for the topics of planned online meetings, educational establishments must also submit a list of participants to be vetted. And once an approved conference or seminar has taken place, a link to the event must be provided. As The Indian Express column points out, these new restrictions are likely to hit Indian universities particularly hard:

Unlike their western counterparts, they are severely under-funded. They can neither organise many international conferences, nor send their faculty to participate in such events abroad. The recent boom in webinars has, hence, come as a big boon to them. It saves travel and hospitality costs and also overcomes the hassles of getting visas for invitees from "unfriendly" countries. Moreover, such events can be easily, and more frequently, organised even by institutions in rural and remote areas. Disturbingly, the government wants to curtail these major benefits of the digital revolution to millions of our teachers, students and scientists.

The Indian government's desire to control what is said, and by whom, is likely to harm the spread of knowledge in a country that was just beginning to enjoy one of the few benefits of the pandemic: easier access to international academic gatherings.


9 Comments

Posted on Techdirt - 10 February 2021 @ 10:51am

Snippet Taxes Not Only Violate The Berne Convention, But Also Betray The Deepest Roots Of Newspaper Culture

from the won't-someone-think-of-the-poor-Rupert-Murdochs? dept

Last week Techdirt wrote about Australia's proposed News Media Bargaining Code. This is much worse than the already awful Article 15 of the EU Copyright Directive (formerly Article 11), which similarly proposes to force Internet companies to pay for the privilege of sending traffic to traditional news sites. A post on Infojustice has a good summary of the ways in which the Australians aim to do more harm to the online world than the Europeans:

1) The protection for press publications provided by [the EU's] DSM Article 15 does not apply to linking or the use of "very short extracts." The Code explicitly applies to linking and the use of extracts of any length. Accordingly, the Code applies to search engines and social media feeds, not just news aggregation services.

2) The Code forces Internet platforms to bargain collectively with news publishers or to be forced into rate setting through binding arbitration. DSM Article 15 does not require any similar rate-setting mechanism.

3) The Code imposes burdensome obligations on the platforms, some of which directly implicate free expression. For example, platforms would need to provide news businesses with the ability to "turn off" comments on individual stories they post to digital platforms. DSM Article 15 imposes none of these obligations.

4) The Code prohibits the platforms from differentiating between an Australian news business and a foreign news business. This provision prevents platforms from exiting the market by taking care not to link to Australian news content. If the platform links to international news content, e.g., articles from the New York Times, it must also link to (and therefore pay for) Australian news content. DSM Article 15 does not contain a non-differentiation provision.

The same blog post points out that these elements are so bad they probably violate the Berne Convention -- the foundational text for modern copyright law. Article 10(1) of the Berne Convention provides that:

it shall be permissible to make quotations from a work which already has been lawfully made available to the public, provided that their making is compatible with fair practice, and their extent does not exceed that justified by the purpose, including quotations from newspaper articles and periodicals in the form of press summaries.

Although the Berne Convention doesn't have any mechanism for dealing with violations, Berne obligations are incorporated in the World Trade Organization's Agreement on Trade Related Intellectual Property Rights (TRIPS) and in the Australia-US Free Trade Agreement. Both of those offer dispute resolution that the US could use to challenge the Australian Code if and when it comes into effect. The proposed schemes to force Internet companies to pay even for quoting snippets of news not only violate the Berne Convention: they are also a betrayal of the deepest roots of newspaper culture. That emerges from a fascinating post by Jeff Jarvis, a professor at CUNY's Newmark J-school. He writes:

For about the first century, starting in 1605, newspapers were composed almost entirely of reports copied from mailed newsletters, called avvisi, which publishers promised not to change as they printed excerpts; the value was in the selecting, cutting, and pasting. Before them the avvisi copied each other by hand. These were the first news networks.

In the United States, the Post Office Act of 1792 allowed newspapers to exchange copies in the mail for free with the clear intent of helping them copy and publish each other's news. In fact, newspapers employed "scissors editors" to compile columns of news from other papers.

In other words, these new snippet taxes are wrong at every level: practical, legal and cultural. And yet gullible lawmakers still want to pass them, apparently to protect defenseless publishers like Rupert Murdoch against the evil lords of the mighty Information Superhighway.


23 Comments

Posted on Techdirt - 4 February 2021 @ 12:12pm

Microsoft Offers To Break The Web In A Desperate Attempt To Get Somebody To Use Its Widely-Ignored Bing Search Engine

from the opportunistic-much? dept

One of the key battles surrounding the EU Copyright Directive involves the threshold at which upload filters will block the use of copyright material in things like memes and mashups. A year ago, Germany was proposing ridiculously tight restrictions: 128-by-128 pixel images, and three-second videos. Now, it is framing the issue in terms of uses that aren't "automatically" blocked by upload filters. The proposed limits here are 15 seconds of video or audio, 125K graphics, and 160 -- yes, 160 -- characters of text (original in German). Even these tiny extracts could be subsequently blocked by upload filters, depending on the circumstances.
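To make those numbers concrete, here is a toy check of whether an excerpt would fall under the German proposal's de-minimis thresholds. The function and its names are my own sketch of the published figures (15 seconds of video or audio, 125 KB of graphics, 160 characters of text), not anything from the draft law, which as noted above attaches further conditions beyond raw size.

```python
# Toy illustration of the German proposal's de-minimis thresholds.
# The thresholds come from the published draft; the function itself
# is a hypothetical sketch, and the real proposal allows even these
# small excerpts to be blocked depending on the circumstances.

THRESHOLDS = {
    "video_seconds": 15,
    "audio_seconds": 15,
    "image_bytes": 125 * 1024,   # "125K graphics"
    "text_chars": 160,
}

def within_de_minimis(kind: str, size) -> bool:
    """Return True if an excerpt of the given kind and size falls
    under the proposed limits for automatic blocking."""
    key = {
        "video": "video_seconds",
        "audio": "audio_seconds",
        "image": "image_bytes",
        "text": "text_chars",
    }[kind]
    return size <= THRESHOLDS[key]
```

Even stated this baldly, the limits show how little room a meme or mashup would have: a 16-second clip or a 161-character quotation already falls outside them.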

The worsening situation over upload filters has obscured the other bad idea of the EU Copyright Directive: the so-called "link tax", which would require large Internet companies like Google to pay when they use even small amounts of news material. One worrying development in this area is that the idea has spread beyond the EU. As Techdirt reported, Australia is bringing in what amounts to a tax on Google and Facebook for daring to send traffic to legacy news organizations -- notably those of Rupert Murdoch. In July last year, the Australian government released a draft of what is now dubbed the "News Media Bargaining Code". One of the people arguing against the idea is Tim Berners-Lee (pdf):

Requiring a charge for a link on the web blocks an important aspect of the value of web content. To my knowledge, there is no current example of legally requiring payments for links to other content. The ability to link freely -- meaning without limitations regarding the content of the linked site and without monetary fees -- is fundamental to how the web operates, how it has flourished till present, and how it will continue to grow in decades to come.

He concludes: "If this precedent were followed elsewhere it could make the web unworkable around the world." This, indeed, is the danger here: if Australia and the EU go ahead with their plans, it is likely to become the norm globally, with serious consequences for the Internet as a whole.

In response, Google has threatened to pull out of Australia entirely. That's probably just part of its negotiating strategy. In a blog post from a couple of months ago, Mel Silva, VP for Google Australia & New Zealand, wrote: "we strongly believe that with the practical changes we've outlined [in the post], there is a path forward." Similarly, Australia's Prime Minister, Scott Morrison, is now talking of a "constructive" conversation with Google's CEO, Sundar Pichai. But that hasn't stopped Microsoft sensing an opportunity to make life harder for its rival in the online search market. Microsoft's President, Brad Smith, has published the following intervention:

Microsoft fully supports the News Media Bargaining Code. The code reasonably attempts to address the bargaining power imbalance between digital platforms and Australian news businesses. It also recognises the important role search plays, not only to consumers but to the thousands of Australian small businesses that rely on search and advertising technology to fund and support their organisations. While Microsoft is not subject to the legislation currently pending, we'd be willing to live by these rules if the government designates us.

And here's why it "fully supports" this misguided link tax:

Microsoft will ensure that small businesses who wish to transfer their advertising to Bing can do so simply and with no transfer costs. We recognise the important role search advertising plays to the more than two million small businesses in Australia.

We will invest further to ensure Bing is comparable to our competitors, and we remind people that they can help: with every search, Bing gets better at finding what you are looking for.

That is, in a desperate attempt to get someone to use its still largely ignored search engine Bing, Microsoft is apparently willing to throw the Web under the bus. It's an incredibly short-sighted and selfish move. Sure, it's legitimate to want to take advantage of a rival's problems. But not to the extent of causing serious harm to the very fabric of the Web, the hyperlink.


39 Comments

Posted on Techdirt - 29 January 2021 @ 10:44am

The Lies Told About The EU Copyright Directive's Upload Filters May Help Get Them Thrown Out In Court

from the freedom-to-conduct-business dept

Although the main fight over the EU's Copyright Directive was lost back in March 2019, there are plenty of local battles underway. That's a consequence of the fact that an EU Directive has to be implemented by separate national laws in each of the region's 27 member states. Drawing up the local legislation is mostly straightforward, except for the controversial Article 17, which effectively brings in a requirement to filter all uploads. Trying to come up with a text that meets the contradictory obligations of the Directive is proving difficult. For example, although the law is supposed to stop unauthorized uploads, this must not be through "general monitoring", which is not permitted in the EU because of the e-Commerce Directive.

As the various countries struggle to resolve these problems, it is no surprise that they are coming up with very different approaches. These are usefully summed up in a new post on the Kluwer Copyright blog. For example, France is implementing the Copyright Directive by decree, rather than via ordinary legislative procedures. As Techdirt reported, the French government is pushing through an extreme interpretation that ignores requirements for user protections. Germany, by contrast, is bringing in a wide-ranging new law that contains a number of positive ideas:

a new "minor use" exception that would legalise minor uses of third party works on online platforms.

In addition, the proposal also introduced the ability for uploaders to "pre-flag" any uploads as legitimate, protecting them from automated blocking.

It limited the scope of the requirement for platforms to obtain licences to "works that users typically upload". Platforms can meet their best efforts obligation to obtain authorisation by approaching collective management organisations and by responding to licence offers from rightsholders with a representative repertoire.

There is an irony here. One of the main reasons for introducing the Copyright Directive was to make copyright law more consistent across the EU. Article 17 is causing copyright law there to diverge even more.

The Kluwer Copyright blog has two more recent posts about Article 17, written by Julia Reda and Joschka Selinger. They look at an aspect of upload filters that could be of crucial importance in the case brought before the Court of Justice of the European Union (CJEU) by Poland, which seeks to have upload filters removed from the Copyright Directive.

On several occasions, the CJEU has thrown out blocking injunctions for violating the service providers' freedom to conduct a business. In a recently published study on behalf of German fundamental rights litigation organization Gesellschaft für Freiheitsrechte e.V., the authors of this blog post argue that when ruling on the request for annulment of Article 17, the CJEU will have to balance all relevant fundamental rights, including the freedom to conduct a business. In this blog post, we will put the spotlight on this under-examined fundamental right. In part 1, we will discuss its relevance for the court case pending before the CJEU. We will examine the ways in which Article 17 places new burdens on online platforms that are fundamentally different from the voluntary copyright enforcement schemes employed by some of the larger platforms today. In part 2, we analyse those new platform obligations in light of the CJEU case law on the freedom to conduct a business and discuss the role of the proportionality mechanism included in Article 17 (5). We find that the legislator may have grossly underestimated the impact of Article 17 on the freedom to conduct a business.

The basic argument is simple. During the debate on the Copyright Directive, its supporters were deeply dishonest about how it would work in practice. They repeatedly claimed that it would not require upload filters, and denied that it would be hard to implement in a way that was compatible with existing EU laws. Unfortunately, the politicians in the European Parliament were taken in by these claims, and passed what became Article 17 without amendments.

But the case before the CJEU gives another chance to point out the truth about upload filters: they exist only for things like music and video, not for all copyrightable material as Article 17 requires; they don't work well; and even these flawed systems can only be afforded by Internet giants like Google. In practical terms, this means that smaller companies that allow user uploads will be unable to comply with Article 17, since doing so would require technology that is expensive to develop or license, and that wouldn't even work properly. As such, a key argument in the CJEU case will be that upload filters represent an unjustified interference with the freedom to conduct a business in the EU, and should be thrown out. Let's hope the CJEU agrees.


15 Comments

Posted on Techdirt - 22 January 2021 @ 12:14pm

Turns Out That Brexit Means Rotting Pigs' Heads, And Losing An EU Copyright Exception

from the taking-the-orphans-hostage-again dept

Surprising no one who understands anything about international trade, the UK's departure from the EU -- Brexit -- is proving to be disastrous for its economy. Among the latest victims are Scottish fishermen, who are no longer able to sell their catches to EU customers, and the UK meat industry, which has tons of rotting pigs' heads on its hands. And it turns out that Brexit will be making copyright worse too.

It concerns the slightly obscure area of what are traditionally called "orphan works", although "hostage works" would be a better description. Whatever you call them, they are the millions of older works that are out of print and have no obvious owners, and which remain locked away because of copyright. This has led to various proposals around the world to liberate them, while still protecting the copyright holders if they later appear and assert ownership. One of these proposals became the 2012 EU Directive "on certain permitted uses of orphan works". It created a new copyright exception to allow cultural institutions to digitize written, cinematic or audio-visual works, and sound recordings, and to display them on their Web sites, for non-commercial use only. As Techdirt noted at the time, the Directive was pretty feeble. But even that tiny copyright exception has been taken away in the UK, following Brexit:

The EU orphan works exception will no longer apply to UK-based institutions and will be repealed from UK law from 1 January 2021.

UK institutions may face claims of copyright infringement if they make orphan works available online in the UK or EEA, including works they had placed online before 1 January 2021.

Now, in order to use orphan works in the UK, people must pay a recurring license fee based on the number of works involved. As a result, the British Library has started withdrawing material that it had previously digitized under the EU orphan works directive:

As many of you know, back in 2015 the British Library, working closely with partners at Jisc's Journal Archives platform and with copyright holders, digitised and made freely available the entire run of Spare Rib magazines. We are delighted that this resource, documenting a vibrant and important period of women's activism in the UK, has been so well used by researchers and those interested in the Women's Liberation Movement.

It is therefore with considerable regret that we are confirming that the resource, as a result of the UK leaving the European Union, will no longer be available following the end of the transition period. The decision to close down the Spare Rib resource once the UK leaves the EU was made on the basis of the copyright status of the digitised magazine, which relies heavily on the EU orphan works directive.

Brexit was sold on the basis that it would make things better in the UK. And yet the change to copyright brought about by Brexit turns out to make things worse for scholars and the general public. It seems that pigs' heads are not the only thing rotting thanks to Brexit.


32 Comments

Posted on Techdirt - 19 January 2021 @ 8:01pm

Free Access To Academic Papers For Everyone In India: Government Proposes 'One Nation, One Subscription' Approach As Part Of Major Shift To Openness

from the open-everything dept

Techdirt has been following an important copyright case in India concerning how people in that country can access academic journals. Currently, many turn to "shadow libraries" like Sci-Hub and Libgen, because they cannot afford the often hefty fees that academic publishers charge to access papers. If a new "Science, Technology, and Innovation Policy" (pdf), just released as a draft by the Government of India, comes to fruition, people may not need to:

The Government of India will negotiate with journal publishers for a "one nation, one subscription" policy whereby, in return for one centrally-negotiated payment, all individuals in India will have access to journal articles. This will replace individual institutional journal subscriptions.

That's just one of the bold ideas contained in the 63-page document. Here's another: open access to all research funded by Indian taxpayers.

Full text of final accepted author versions of manuscripts (postprints and optionally preprints) along with supplementary materials, which are the result of public funding or performed in publicly funded institutions, or were performed using infrastructure built with the support of public funds will be deposited, immediately upon acceptance, to an institutional repository or central repository.

Similarly, all data generated from publicly funded research will be released as open data, with a few exceptions:

All data used in and generated from public-funded research will be available to everyone (larger scientific community and public) under FAIR (findable, accessible, interoperable and reusable) terms. Wherever applicable, exceptions will be made on grounds of privacy, national security and Intellectual Property Rights (IPR). Even in such situations, suitably anonymised and/or redacted data will be made available. In all cases, where the data cannot be released to the general public, there will be a mechanism to release it to bonafide/authorised researchers.

All publicly funded scientific resources will be made shareable and accessible nationally through digital platforms, including laboratories, supercomputing and AI facilities. Publicly funded open educational resources will be made available under a "minimally restrictive" open content license. Libraries at publicly funded institutions will be accessible to everyone, subject only to "reasonable security protocols".

Another idea is the creation of a dedicated portal (remember those?), the Indian Science and Technology Archive of Research, which will provide access to all publicly funded research, including manuscripts, research data, supplementary information, research protocols, review articles, conference proceedings, monographs, book chapters, etc. There will also be a national science, technology and innovation "observatory", which will establish data repositories and a computational grid, among other things.

It's an incredibly ambitious program, with an ambitious goal: "To achieve technological self-reliance and position India among the top three scientific superpowers in the decade to come." The other two superpowers being the US and China, presumably. Whether that program is implemented, wholly or even just in part, is another matter, and will depend on the lobbying that will now inevitably take place, and the usual budgetary constraints. But it is certainly impressive in the completeness of its vision, and in its commitment to openness and sharing in all its forms.

Comments on the proposals can be submitted until Monday, 25 January 2021.


4 Comments

Posted on Techdirt - 12 January 2021 @ 3:21am

Twitter Bans Sci-Hub's Account Because Of 'Counterfeit Goods' Policy, As Indian Copyright Case Heats Up

from the plainly-wrong dept

A couple of weeks ago, Techdirt wrote about an important copyright case in India, where a group of academic publishers is seeking a dynamic injunction to block access to the "shadow libraries" Sci-Hub and Libgen. The person behind Sci-Hub, Alexandra Elbakyan, has written to Techdirt with an update on the situation:

Sci-Hub account with 180K subscribers with almost everyone supporting it got BANNED on Twitter due to "counterfeit goods" policy. It existed for 9 years and it was frozen once, but I resolved it by uploading my passport scan. But now it is banned without the possibility to restore it, as Twitter support replied! And it happened right after Indian scientists revolted against Elsevier and other academic publishers, after Sci-Hub posted on Twitter about danger of being blocked - thousands of people spoke up against this on Twitter.

Now Twitter said to all of them, SHUT UP!

Although it's impossible at this stage to say whether Sci-Hub's Twitter account was closed as a direct result of action by Elsevier and other publishers, it is certainly true that the Indian copyright case has blown up into a major battle. The widely respected Indian SpicyIP site has several posts on the important legal and constitutional issues raised by the legal action. One of these concludes:

It can only be hoped that the court factors in the different considerations of a developing nation like India as against the developed nations where the defendant websites have presently been blocked, for it will have a massive impact on the research potential of the country.

Another post goes further, insisting: "The ongoing litigation, therefore, must, on constitutional grounds if not copyright-related grounds, be decided in the favour of the defendants." Further support for Sci-Hub and Libgen has come from 19 senior Indian scientists and three organizations, and the Delhi High Court has agreed to allow them to intervene, as pointed out by TorrentFreak. In their application, the scientists wrote:

copyright is not merely a matter of private interests but an issue that deeply concerns public interest especially when it comes to access to learning materials... If the two websites are blocked it will effectively be killing the lifeline of research and learning in higher education in the country.

An organization called the Breakthrough Science Society has created a petition in favor of the defendants. The petition's statement says:

International publishers like Elsevier have created a business model where they treat knowledge created by academic research funded by taxpayers' money as their private property. Those who produce this knowledge -- the authors and reviewers of research papers -- are not paid and yet these publishers make windfall profit of billions of dollars by selling subscriptions to libraries worldwide at exorbitantly inflated rates which most institutional libraries in India, and even developed countries, cannot afford. Without a subscription, a researcher has to pay between $30 and $50 to download each paper, which most individual Indian researchers cannot afford. Instead of facilitating the flow of research information, these companies are throttling it.

Alexandra Elbakyan of Kazakhstan has taken an effective and widely welcomed step by making research papers, book chapters and similar research-related information freely available through her website Sci-Hub. Libgen (Library Genesis) renders a similar service. We support their initiative which, we contend, does not violate any norm of ethics or intellectual property rights as the research papers are actually intellectual products of the authors and the institutions.

As these comments from academics make clear, the stakes are high in the current legal action against Sci-Hub and Libgen. Against that background, shutting down Sci-Hub's Twitter account is ridiculous, since it is purely informational, and served as a valuable forum for discussing important copyright issues, including the Indian court case. Whatever you might think of the company's decision to suspend certain other accounts, this one is plainly wrong.

Follow me @glynmoody on Twitter, Diaspora, or Mastodon.


Posted on Techdirt - 8 January 2021 @ 1:30pm

Identifying Insurrectionists Is Going To Be Easy -- Thanks To Social Media And All The Other Online Trails People Leave

from the not-going-dark dept

As Techdirt readers know, there's a lot of hatred for social media in some circles, and lots of lies being told about why Section 230 is to blame. Against that background, it's useful to remember that, as their name implies, they are just media -- things in the middle of people communicating to others. As such, they are neither good nor bad, but tools that can be used for both. In addition, social media posts themselves can be used in good and bad ways. Examples of the former include the Bellingcat investigations that frequently analyze social media to tease out information about major events that is otherwise hard to obtain. Sometimes, the information is so easy to find, you don't even need any special skills. An article on Ars Technica points out that identifying the leading insurrectionists who participated in the recent events at the US Capitol is going to be pretty straightforward, thanks to social media:

the DC Metropolitan Police and the FBI will probably need to look no further than a cursory Google search to identify many of the leaders of Wednesday's insurrection, as many of them took to social media both before and after the event to brag about it in detail.

Things are made much easier because many of those taking part in the rioting did not wear masks, despite requirements to do so in some locations. As a result, the authorities have thousands of really clear pictures of the insurrectionists' faces. In addition, Witness, an organization that "helps people use video and technology to protect and defend human rights", was encouraging people to save livestreams of the riots, and to share them with "investigating organizations like Bellingcat". The Ars Technica article notes:

Neither would an agency need actual photos or footage to track down any mob participant who was carrying a mobile phone. Law enforcement agencies have also developed a habit in recent years of using so-called geofence warrants to compel companies such as Google to provide lists of all mobile devices that appeared within a certain geographic area during a given time frame.

This underlines a fact that law enforcement doesn't like to talk about: far from things "going dark", there is more useful data that can be used to identify and convict people than ever before. In this case, it could perhaps also have been used to prevent the violence, since far-right supporters openly discussed their plans online beforehand. But it wasn't -- we don't know why. This plethora of readily-available information is another reason why backdooring encryption is not just foolish, but completely unnecessary. Today, there are so many other sources of key information -- not least the much-maligned social media.


Posted on Techdirt - 4 January 2021 @ 8:22pm

Seven Years Ago, CERN Gave Open Access A Huge Boost; Now It's Doing The Same For Open Data

from the tim-berners-lee-would-be-proud dept

Techdirt readers will be very familiar with CERN, the European Organization for Nuclear Research (the acronym comes from an earlier French name: Conseil Européen pour la Recherche Nucléaire). It's best known for two things: being the birthplace of the World Wide Web, and home to the Large Hadron Collider (LHC), the world's largest and most powerful particle accelerator. Over 12,000 scientists of 110 nationalities, from institutes in more than 70 countries, work at CERN. Between them, they produce a huge quantity of scientific papers. That made CERN's decision in 2013 to release nearly all of its published articles as open access one of the most important milestones in the field of academic publishing. Since 2014, CERN has published 40,000 open access articles. But as Techdirt has noted, open access is just the start. As well as the final reports on academic work, what is also needed is the underlying data. Making that data freely available allows others to check the analysis, and to use it for further investigation -- for example, by combining it with data from elsewhere. The push for open data has been underway for a while, and has just received a big boost from CERN:

The four main LHC collaborations (ALICE, ATLAS, CMS and LHCb) have unanimously endorsed a new open data policy for scientific experiments at the Large Hadron Collider (LHC), which was presented to the CERN Council today. The policy commits to publicly releasing so-called level 3 scientific data, the type required to make scientific studies, collected by the LHC experiments. Data will start to be released approximately five years after collection, and the aim is for the full dataset to be publicly available by the close of the experiment concerned. The policy addresses the growing movement of open science, which aims to make scientific research more reproducible, accessible, and collaborative.

The level 3 data released can contribute to scientific research in particle physics, as well as research in the field of scientific computing, for example to improve reconstruction or analysis methods based on machine learning techniques, an approach that requires rich data sets for training and validation.

CERN's open data portal already contains 2 petabytes of data -- a figure that is likely to rise rapidly, since LHC experiments typically generate massive quantities of data. However, the raw data will not in general be released. The open data policy document (pdf) explains why:

This is due to the complexity of the data, metadata and software, the required knowledge of the detector itself and the methods of reconstruction, the extensive computing resources necessary and the access issues for the enormous volume of data stored in archival media. It should be noted that, for these reasons, general direct access to the raw data is not even available to individuals within the collaboration, and that instead the production of reconstructed data (i.e. Level-3 data) is performed centrally. Access to representative subsets of raw data -- useful for example for studies in the machine learning domain and beyond -- can be released together with Level-3 formats, at the discretion of each experiment.

There will also be Level 2 data, "provided in simplified, portable and self-contained formats suitable for educational and public understanding purposes". CERN says that it may create "lightweight" environments to allow such data to be explored more easily. Virtual computing environments for the Level 3 data will be made available to aid the re-use of this primary research material. Although the data is being released using a Creative Commons CC0 waiver, acknowledgements of the data's origin are required, and any new publications that result must be clearly distinguishable from those written by the original CERN teams.

As with the move to open access in 2013, the new open data policy is unlikely to have much of a direct impact for people outside the high energy physics community. But it does represent an extremely strong and important signal that CERN believes open data must and will become the norm.


Posted on Techdirt - 28 December 2020 @ 7:51pm

Elsevier Wants To Stop Indian Medics, Students And Academics Accessing Knowledge The Only Way Most Of Them Can Afford: Via Sci-Hub And Libgen

from the copyright-is-not-an-inevitable,-divine,-or-natural-right dept

Last month Techdirt wrote about some ridiculous scaremongering from Elsevier against Sci-Hub, which the publisher claimed was a "security risk". Sci-Hub, with its 85 million academic papers, is an example of what are sometimes termed "shadow libraries". For many people around the world, especially in developing countries, such shadow libraries are very often the only way medics, students and academics can access journals whose elevated Western-level subscription prices are simply unaffordable for them. That fact makes a new attack by Elsevier, Wiley and the American Chemical Society against Sci-Hub and the similar Libgen shadow library particularly troubling. The Indian title The Wire has the details:

the publishing giants are demanding that Sci-Hub and Libgen be completely blocked in India through a so-called dynamic injunction. The publishers claim that they own exclusive rights to the manuscripts they have published, and that Sci-Hub and Libgen are engaged in violating various exclusive rights conferred on them under copyright law by providing free access to their copyrighted contents.

Techdirt readers will note the outrageous claim there: that these publishers "own exclusive rights to the manuscripts they have published". That's only true in the sense that most publishers force academics to hand over the copyright as a condition of being published. The publishers don't pay for that copyright, and contribute almost nothing to the final published paper save a little editing and formatting: manuscript review is carried out for free by other academics. And yet the publishers are demanding that Sci-Hub and Libgen should be blocked in India on this basis. Moreover, they want a "dynamic injunction":

That is, once a defendant's website is categorised as a "rogue website", the plaintiff won't have to go back to the judges to have any new domains blocked for sharing the same materials, and can simply get the injunction order extended with a request to the court's deputy registrar.

The legal action by publishers against shadow libraries is part of a broader offensive around the world, but there's a reason why they may face extra challenges in India -- over and above the fact that Sci-Hub and Libgen contain huge quantities of material that can unambiguously be shared quite legally. As Techdirt reported back in 2013, a group of Western publishers sued Delhi University over photocopied versions of academic textbooks. For many students in India, this was the only way they could afford such educational materials. In 2016, the Indian court ruled that "copyright is not an inevitable, divine, or natural right", and that photocopying textbooks is fair use.

The parallels with the new suit against Sci-Hub and Libgen are clear. The latter are digital photocopy sites: they make available copies of educational material to students and researchers who could not otherwise afford access to this knowledge. The copies made by Sci-Hub and Libgen should be seen for what they are: fair use of material that was in any case largely created using public funds for the betterment of humanity, not to boost the bottom line of publishers with profit margins of 35-40%.


Posted on Techdirt - 22 December 2020 @ 1:46pm

Czech Search Engine Seznam Joins In the 'Let's Sue Google' Fun, Seeks $417 Million in Damages

from the good-luck-with-that dept

It seems that people have decided that now is a good time to attack Google in various ways. In October, the US Justice Department sued Google for allegedly violating antitrust laws. This month, ten US states sued Google, alleging anticompetitive behavior, followed by another 38 states, alleging that the company has created an illegal monopoly in online search and advertising. In November, 165 companies and industry bodies sent a letter to the EU complaining about Google and asking for tougher antitrust action. The EU has also released first drafts of its new Digital Services Act and Digital Markets Act. One of the key elements of the new laws is tackling the power of leading online platforms like Google.

The EU has already taken steps towards that end. Back in 2018, the EU fined Google €4.34 billion for breaching antitrust rules. As part of its compliance with the EU's demands, Google introduced a process whereby other search engines can bid to appear on a "choice screen", which lets Android users pick the default search engine when they set up their smartphone. However, some rival search engines, like DuckDuckGo, were unhappy with the approach. At the end of October, DuckDuckGo, along with Ecosia, Lilo, Qwant and Seznam -- search engines from Germany, France, France and the Czech Republic, respectively -- sent an open letter to the European Commission on the subject:

We are companies operating search engines that compete against Google. As you know, we are deeply dissatisfied with the so-called remedy created by Google to address the adverse effects of its anticompetitive conduct in the Android case. We understand that Google regularly updates you regarding its pay-to-play auction, but it appears that you may not be receiving complete or accurate information.

We are writing to request a trilateral meeting with your office, ourselves, and Google, with the goal of establishing an effective preference menu. Our respective designees could work in advance to create a tight agenda for this meeting to ensure it is productive and collaborative.

Now one of those search engines -- Seznam -- has gone even further, as reported here by Reuters:

Seznam, the Czech Republic's leading home-grown web search platform, said on Thursday it had claimed 9.072 billion crowns ($417 million) in damages from Google, alleging that the U.S. giant restricted competition.

What makes this move noteworthy is that Seznam bases its claim on the fact that the EU has already determined that Google had breached EU rules in this area. The complaint concerns the period 2011 to 2018, before the EU forced Google to adopt the choice screen. Seznam's deputy chairman Pavel Zima is quoted as saying: "we claim the compensation of damage that we have suffered while trying to distribute our applications and services via mobile devices with Android operation system". According to Reuters, Seznam has sent the claim to Google with a 30-day deadline, and says that it is prepared to take civil legal action if necessary. We'll see if it does, and how that works out.


Posted on Techdirt - 17 December 2020 @ 12:27pm

Secret Agents Implicated In The Poisoning Of Opposition Leader Alexey Navalny Identified Thanks To Russia's Black Market In Everybody's Personal Data

from the poor-data-protection-is-bad-for-Vlad dept

Back in August, the Russian opposition leader Alexey Navalny was poisoned on a flight to Moscow. Despite initial doubts -- and the usual denials by the Russian government that Vladimir Putin was involved -- everyone assumed it had been carried out by the country's FSB, successor to the KGB. Remarkable work by the open source intelligence site Bellingcat, which Techdirt first wrote about in 2014, has now established beyond reasonable doubt that FSB agents were involved:

A joint investigation between Bellingcat and The Insider, in cooperation with Der Spiegel and CNN, has discovered voluminous telecom and travel data that implicates Russia's Federal Security Service (FSB) in the poisoning of the prominent Russian opposition politician Alexey Navalny. Moreover, the August 2020 poisoning in the Siberian city of Tomsk appears to have happened after years of surveillance, which began in 2017 shortly after Navalny first announced his intention to run for president of Russia.

That's hardly a surprise. Perhaps more interesting for Techdirt readers is the story of how Bellingcat pieced together the evidence implicating Russian agents. The starting point was finding passengers who booked similar flights to those that Navalny took as he moved around Russia, usually earlier ones to ensure they arrived in time but without making their shadowing too obvious. Once Bellingcat had found some names that kept cropping up too often to be a coincidence, the researchers were able to draw on a unique feature of the Russian online world:

Due to porous data protection measures in Russia, it only takes some creative Googling (or Yandexing) and a few hundred euros worth of cryptocurrency to be fed through an automated payment platform, not much different than Amazon or Lexis Nexis, to acquire telephone records with geolocation data, passenger manifests, and residential data. For the records contained within multi-gigabyte database files that are not already floating around the internet via torrent networks, there is a thriving black market to buy and sell data. The humans who manually fetch this data are often low-level employees at banks, telephone companies, and police departments. Often, these data merchants providing data to resellers or direct to customers are caught and face criminal charges. For other batches of records, there are automated services either within websites or through bots on the Telegram messaging service that entirely circumvent the necessity of a human conduit to provide sensitive personal data.

The process of using these leaked resources to establish the other agents involved in the surveillance and poisoning of Navalny, and to uncover their real identities -- they naturally used false names when booking planes and cars -- is discussed in fascinating detail on the Bellingcat site. But the larger point here is that strong privacy protections are good not just for citizens, but for governments too. As the Bellingcat researchers put it:

While there are obvious and terrifying privacy implications from this data market, it is clear how this environment of petty corruption and loose government enforcement can be turned against Russia's security service officers.

As well as providing Navalny with confirmation that the Russian government at the highest levels was probably behind his near-fatal poisoning, this latest Bellingcat analysis also achieves something else that is hugely important. It has given privacy advocates a really powerful argument for why governments -- even the most retrogressive and oppressive -- should be passing laws to protect the personal data of every citizen effectively. Because if they don't, clever people like Bellingcat will be able to draw on the black market resources that inevitably spring up, to reveal lots of things those in power really don't want exposed.


Posted on Techdirt - 9 December 2020 @ 10:51am

German Court Orders Encrypted Email Service Tutanota To Backdoor One Account

from the end-to-end-crypto-is-still-your-friend dept

A legal requirement to add backdoors to encrypted systems for "lawful access" has been discussed for many years. Last month, the EU became the latest to insist that tech companies should just nerd harder to reconcile the contradictory demands of access and security. That's still just a proposal, albeit a dangerous one, since it comes from the EU Council of Ministers, one of the region's more powerful bodies. However, a court in Germany has decided it doesn't need to wait for EU legislation, and has ordered the encrypted Web-email company Tutanota to insert a backdoor into its service (original in German). The order, from a court in Cologne, is surprising, because it contradicts an earlier decision by the court in Hanover, capital of the German state of Lower Saxony, and Tutanota's home town. The Hanover court based its ruling on a judgment by the Court of Justice of the European Union (CJEU), the EU's highest court. In 2019, the CJEU said that:

a web-based email service which does not itself provide internet access, such as the Gmail service provided by Google, does not consist wholly or mainly in the conveyance of signals on electronic communications networks and therefore does not constitute an 'electronic communications service'

Despite this, in the Tutanota case the Cologne court applied a German law for telecoms. Tutanota's co-founder Matthias Pfau explained to TechCrunch:

"The argumentation is as follows: Although we are no longer a provider of telecommunications services, we would be involved in providing telecommunications services and must therefore still enable telecommunications and traffic data collection," he told TechCrunch.

"From our point of view -- and law German law experts agree with us -- this is absurd. Neither does the court state what telecommunications service we are involved in nor do they name the actual provider of the telecommunications service."

Given that ridiculous logic, it's no surprise that Tutanota will be appealing to Germany's Federal Court of Justice. But in the meantime the company must comply with the court order by developing a special surveillance capability. Importantly, it only concerns one account -- allegedly involved in an extortion attempt -- that seems to be no longer in use. Moreover, as the TechCrunch article explains, the monitoring function will only apply to future emails that the account receives. And even then, it will only deliver any unencrypted emails that are present, because Tutanota is not able to decrypt emails that use end-to-end encryption, which is entirely under the user's control, not Tutanota's.

That means the practical effect of this court order is extremely limited: to future unencrypted emails of just one quiescent account. But independently of its real-life usefulness, this order sets a terrible precedent of a court ordering an Internet company to insert what amounts to a backdoor in an account. That's why it is vital that Tutanota's appeal prevails -- for both the company, and for the EU Internet as a whole.


Posted on Techdirt - 3 December 2020 @ 3:16am

576 German Artists Want EU Copyright Directive Made Worse, With No Exceptions For Memes Or Mashups

from the promise?-what-promise? dept

When the EU Copyright Directive was being drawn up, one of the main battlegrounds concerned memes. The fear was that the upload filters brought in by the new law would not be able to distinguish between legal use of copyright material for things like memes, quotation, criticism, review, caricature, parody and pastiche, and illegal infringements. Supporters of the Directive insisted that memes and such-like would be allowed, and that it was simply scaremongering to suggest otherwise. When the Directive was passed, BBC News even ran a story with the headline "Memes exempt as EU backs controversial copyright law". The MEP Mary Honeyball is quoted as saying: "There's no problem with memes at all. This directive was never intended to stop memes and mashups."

But just as supporters insisted that upload filters would not be obligatory -- and then afterwards changed their story, admitting they were the only way to implement the new law -- so people who insisted that memes and parodies would still be allowed are now demanding that they should be banned. Copyright companies were the first to make that shift, and now a group of 576 German artists have sent a letter to the German government and politicians complaining about the proposed implementation of the Copyright Directive in their country (original in German). In particular, they are appalled by:

the introduction of all kinds of exceptions, some of which are so outrageously contrary to European law, that we can only shake our heads: up to 20 seconds of music, remixes, mash-ups, samples etc. -- everything should be freely usable, without a license.

In other words, these are precisely the things that supporters of the EU Copyright Directive promised would remain freely usable, without a license, when experts warned that the new legislation could threaten these legal activities. Now these artists are demanding that the German government ignore all those assurances that user rights would indeed be preserved.

However, as Heise Online reports, not all German artists are so selfish in their desire to take away what few rights ordinary members of the public have in the use of copyright material for memes, remixes and the like. A group of 48 top German artists who use social media to great effect, and who together have around 88 million followers on YouTube, Instagram, Twitter, Twitch and TikTok, takes a very different view of the German government's proposed implementation (original in German):

Article 3 paragraph 6 describes the public reproduction of a tiny excerpt of works protected by copyright and parts of works by the user of a service provider, for non-commercial purposes or where insignificant income is involved. In these circumstances, thanks to Article 3 Paragraph 6 it would be legal to use up to 20 seconds of a film, up to 20 seconds of a sound track, up to 1,000 characters of text and a picture of up to 250 kilobytes without having to purchase a license, since the rightsholders are compensated for the usage via the service provider. We content creators expressly support this rule.

This so-called "legalization of memes" shows that the politics of [the German government] is close to how reality operates. What defines our culture is always evolving, also through digitization. Memes have been part of our culture for many years and are finally recognized by this ministerial draft.

The statement from the 48 social media artists also includes a neat encapsulation of why their position is so different from the 576 artists whining about memes and mashups:

we would like to point out that content creators are simultaneously users and owners of copyrights, i.e. [they are both] creatives and companies in the cultural industry.

The 576 artists who wish to deny an Internet user the right to draw on copyright material for memes, parodies, mashups etc. forget that they too draw constantly on the works of others as they create -- sometimes explicitly, sometimes more subtly. To cast themselves as some kind of creative priesthood that should be granted special privileges not available to everyone else is not just unfair, but insulting and short-sighted.


Posted on Techdirt - 25 November 2020 @ 7:39pm

Good News: Academics Can Make Their Articles Published In Top Journal Nature Freely Available As Open Access. Bad News: They Must Pay $11,000 For Each One

from the free-but-not-free dept

Two years ago, Techdirt wrote about Plan S, an initiative from top research funders that requires all work they support to be published as open access. It's one of the most important moves to get publicly-funded work made freely available, and as such has been widely welcomed. Except by publishers, of course, who have enjoyed profit margins of 35-40% under the current system, which sees libraries and others pay for subscriptions in order to read public research. But Plan S is too big to ignore, not least after the powerful Bill & Melinda Gates Foundation joined the coalition behind it. So publishers have instead come up with ways to subvert the whole idea of making knowledge freely available in order to maintain profits. The latest and perhaps most blatant example of this has come from Springer Nature, the publisher of the journal Nature, widely regarded as one of the top two science titles in the world (the other being Science). Here's what Nature the publisher is doing, reported by Nature the journal:

From 2021, the publisher will charge €9,500, US$11,390 or £8,290 to make a paper open access (OA) in Nature and 32 other journals that currently keep most of their articles behind paywalls and are financed by subscriptions. It is also trialing a scheme that would halve that price for some journals, under a common-review system that might guide papers to a number of titles.

OA advocates are pleased that the publisher has found ways to offer open access to all authors, which it first committed to in April. But they are concerned about the price. The development is a "very significant" moment in the movement to make scientific articles free for all to read, but "it looks very expensive," says Stephen Curry, a structural biologist at Imperial College London.

The research will indeed be freely available to the world, but the authors' institutions have to cough up the massive sum of $11,000 for every article. That will make Nature compliant with Plan S, while ensuring that loads of money continues to roll in. It also means that educational institutions won't save any money even though their researchers can now read some Nature papers for free, since they must pay out huge sums for their own academics to appear in these titles. This is a classic example of double-dipping -- what is more politely called "hybrid open access." Nature the publisher will get paid by institutions to make some articles freely available, but it will continue to be paid by subscribers to access material that has already been paid for. Plan S may mean that Nature and other publishers make even more money.

That's problematic, because more money for Nature and other journals means more money that the academic world has to pay as a whole. One of the big hopes was that open access would not only provide free access to all publicly-funded research, but that the overall cost to institutions would come down dramatically. If costs don't come down, then researchers in poorer countries are unlikely to be able to publish their work in leading journals, because their universities can't afford charges of $11,000 per article. Waiver schemes exist in some cases, but are unsatisfactory, because they effectively require researchers to beg for charity -- hardly what global access to knowledge is supposed to bring about.

At the heart of the problem lies the issue of a title's supposed prestige. Nature can probably get away with charging its extremely high open access rate because researchers are so keen to appear in it for the sake of their careers:

Peter Suber, director of the Harvard Office for Scholarly Communication in Cambridge, Massachusetts, says it is a "prestige tax", because it will pay for the journals' high rejection rates, but will not, in his opinion, guarantee higher quality or discoverability. "I think it would be absurd for any funder, university or author to pay it," he says.

A possible solution is to move to a publishing system based around preprints, which have proved invaluable during the COVID-19 pandemic as a way of getting important research out fast. With this approach, the issue of prestige is irrelevant, since papers are simply placed online directly, for anyone to access freely. That transition is going to be hard, not because there are deep problems with the idea, but because academics prefer to appear in journals like Nature and Science. Open access won't succeed until they realize that this preference is not just selfish but also ultimately harmful to their own academic work, which becomes warped by the perceived need to publish in prominent titles.

Follow me @glynmoody on Twitter, Diaspora, or Mastodon.


Posted on Techdirt - 20 November 2020 @ 3:25am

Poland's Bid To Get Upload Filters Taken Out Of The EU Copyright Directive Suddenly Looks Much More Hopeful

from the biter-bit dept

As readers of Techdirt will remember, one of the biggest defeats for users of the Internet -- and for online freedom of expression -- was the passage of the EU Copyright Directive last year. The law was passed using a fundamentally dishonest argument that it did not require upload filters, because they weren't explicitly mentioned in the text. As a result, supporters of the legislation claimed, platforms would be free to use other technologies that did not threaten freedom of speech in the way that automated upload filters would. However, as soon as the law was passed, countries like France said that the only way to implement Article 17 (originally Article 13) was through upload filters, and copyright companies started pushing for even legal memes to be blocked, since they now admitted that upload filters able to distinguish them were "practically unworkable".

This dishonesty may come back to bite supporters of the law. Techdirt reported last August that Poland had submitted a formal request for upload filters to be removed from the final text. The EU's top court, the Court of Justice of the European Union (CJEU), has just held a public hearing on this case, and as the detailed report by Paul Keller makes abundantly clear, there are lots of reasons to be hopeful that Article 17's upload filters are in trouble from a legal point of view.

The hearing was structured around four questions. Principally, the CJEU wanted to know whether Article 17 meant that upload filters were mandatory. This is a crucial question because the court has found in the past that a general obligation to monitor all user uploads for illegal activities violates the fundamental rights of Internet users and platform operators. This is why proponents of the law insisted that upload filters were not mandatory, but simply one technology that could be applied. In her commentary on the public hearing, the former Pirate Party MEP Julia Reda summarizes Poland's answer to the CJEU's question as follows:

Poland argued that while Article 17 does not explicitly prescribe the use of upload filters, no alternatives are available to platforms to fulfil their obligations under Article 17(4). In a similar vein, [CJEU] Advocate General Saugmandsgaard Øe asked Parliament and Council whether a person who is required to travel from Luxembourg to Brussels within two hours can really be considered to have a choice between driving and walking. Poland also correctly pointed out that the alternatives presented by the European institutions, such as fingerprinting, hashing, watermarking, Artificial Intelligence or keyword search, all constitute alternative methods of filtering, but not alternatives to filtering.

This is the point that every expert has been making for years: there are no viable alternatives to upload filters, which means that Article 17 necessarily imposes a general monitoring requirement, something that is not permitted under current EU law. The fact that Advocate General Øe, who will release his own recommendations on the case early next year, commented on the lack of any practical alternative to upload filters is highly significant. During the hearing, representatives of the French and Spanish governments claimed that this doesn't matter, for the following remarkable reason:

The right to intellectual property should be prioritized over freedom of expression in cases of uncertainty over the legality of user uploads, because the economic damage to copyright-holders from leaving infringements online even for a short period of time would outweigh the damage to freedom of expression of users whose legal uploads may get blocked.

The argument here seems to be that as soon as even a single illegal copy is placed online, it will be copied rapidly and spread around the Internet. But this line of reasoning undermines itself. If placing a single illegal copy online for even a short time really is enough for it to be shared widely, then it only requires a copy to be placed on a site outside the EU's reach for copies to spread around the entire Internet anyway -- because copying is so easy -- which makes the speed of the takedown within the EU irrelevant. As Reda emphasizes, the balance of different competing rights is going to be central to the CJEU's considerations, and it is not just freedom of expression that Article 17 threatens:

Aside from the intellectual property of rightsholders and the freedom of expression and information of users, other fundamental rights need to be taken into account, most notably the freedom to conduct a business of platform operators. Article 17 leaves few guarantees for the freedom to conduct a business, by introducing extremely broad obligations on platform operators, while only limiting those obligations through a general reference to the principle of proportionality.

That is, it's not just online users who will suffer because of the unavoidable use of upload filters: businesses will too. And there's another, more subtle issue that the CJEU will consider, which is whether the EU lawmakers have done a good enough job of establishing the necessary minimum safeguards against the violation of fundamental rights by upload filters:

While the proponents of Article 17 considered that the safeguards included in it are sufficient, such as the complaint and redress mechanism and the obligation to leave legitimate uses unaffected, Poland argued that the EU legislator had deliberately passed on these difficult questions to the national legislators and ultimately the platforms, in an effort to sidestep politically sensitive issues.

The other trick used by supporters of the Copyright Directive to get it approved -- leaving to national governments and individual companies the impossible task of reconciling upload filters with freedom of expression -- may also count against Article 17 when the CJEU rules on whether it is valid. Moreover:

The case before the Court has far-reaching implications beyond the realm of copyright law, as similar sector-specific legislation is being considered in other areas, and the European Commission is in the process of drafting horizontal legislation on content moderation.

In other words, what seemed at the time like a desperate last attempt by Poland to stop the awful upload filters, with little hope of succeeding, now looks to have a decent chance because of the important general issues it raises -- something explored at greater length in a new study written by Reda and others (pdf). That's not to say that Article 17's upload filters are dead, but it seems like the underhand methods used to force this legislation through could turn out to be their downfall.

Follow me @glynmoody on Twitter, Diaspora, or Mastodon.


Posted on Techdirt - 17 November 2020 @ 3:32am

Japan-UK Trade Deal Shows How Controversial Digital Policies Can Be Slipped Through With Little Scrutiny Or Resistance

from the welcome-to-the-data-washing-hub dept

Techdirt has been writing about trade agreements for many years. The reason is simple: as digital technology permeates ever more aspects of modern life, so international trade deals reflect this by including sections that have an important impact on the online world. A new trade agreement between Japan and the UK (pdf) is a good example. It is essentially a copy of the earlier trade deal between the EU and Japan (pdf) -- because of Brexit, UK negotiators have not had the time or resources to draw up their own independent text, which typically requires years of drafting and negotiation. But significantly, the Japan-UK agreement adds several major sections purely about digital matters. All are terrible for the general public, as a briefing document from the UK-based Open Rights Group explains.

One issue concerns transfers of personal data between the UK and Japan. In the EU, this is governed by the well-known and relatively stringent GDPR. In fact, in order to achieve "adequacy" -- essentially, legal permission to receive EU personal data -- Japan has had to strengthen its data protection laws:

The EU required Japan to change its data protection regime, including supplementary rules on onwards transfers of EU data to other countries. The European Parliament has expressed further concerns and Japan is considering further changes.

However, the Japan-UK trade deal explicitly calls for personal data flows to be made easier, with a consequent watering-down of EU-level protections:

The UK deal includes measures which ban restrictions on the free flow of personal data, restrictions which could clash with the limits which European data protection laws place on international transfers. The EU could not adopt these measures in its treaty with Japan, and instead put in placeholder text committing both parties to review the situation in three years. The UK-Japan text heeds the wording of the USMCA [United States-Mexico-Canada Free Trade Agreement], with some extra clauses to exclude procurement and data kept on government orders.

The banning of restrictions on the free flow of personal data allows for public interest policies, following a standard formulation in trade agreements. This regime looks reasonable on paper, but in practice, it is very difficult to implement public interest policies which clash with trade liberalisation, as they are open to legal challenge, which rarely find in favour of restrictions.

Open Rights Group notes two important consequences of this decision not to follow the EU text here. First, it potentially allows personal data flowing to the UK from third countries to be passed to Japan and then on to other jurisdictions, for example the US, with almost no controls or restrictions. This would turn the UK into a "data washing hub" as the Open Rights Group puts it. In this respect, the UK-Japan deal contrasts with the EU-Japan deal, where:

The EU specifically excluded data flows from their trade agreement with Japan. Although Japan has an adequacy decision from the EU, it had to put specific arrangements in place for EU data to stay in Japan.

Given that risk, it is highly likely that the EU will refuse to grant adequacy to the UK, in order to prevent the personal data of EU citizens being sent via the UK to Japan or the US without adequate protections. The crucial role of personal data in modern business means a failure to gain adequacy would have a hugely negative impact on EU-UK trade.

The other significant addition to the text in the digital sphere concerns "technical protection mechanisms" (TPMs) -- DRM and similar technologies -- and the criminalization of circumventing them. Open Rights Group points out:

Another concern for digital rights in the UK is the potential criminalisation of circumvention outside commercial endeavours, affecting ordinary people. Circumvention is not a niche activity. Millions of people used to make backup copies of DVDs and many today have to bypass technological protections in order to convert their protected ebooks to other formats. The provisions in the USMCA will require Mexico to beef up its anti-circumvention laws, including introducing criminal penalties for "commercial advantage or private gain".

Finally, there is an interesting section dealing with cryptography. The Japan-UK deal introduces provisions to shield cryptography from a range of government requirements, such as sharing or disclosing keys. However:

What is different here is that the UK also introduces a specific exception for law enforcement to demand access to encrypted communications and for financial regulation. This is not surprising given that the UK is at the forefront of government demands to access data from encrypted messaging systems such as WhatsApp or Telegram. There are no public interest policy exceptions in this area, however limited, only demands from courts, regulators or police.

Of course, the UK is not alone in wanting access to encrypted communications: a recent Techdirt article noted that the EU is also looking to require what is euphemistically termed "lawful access" -- that is, backdoors -- to end-to-end encrypted communications, as are the US and other countries. What is significant here is that this exception is being introduced in the context of trade, with no expert input or public debate about the details, as are the other measures discussed above. The danger is that this will become common practice, with governments trying to slip contentious digital policies through as short sections of obscure but far-reaching trade deals.

Follow me @glynmoody on Twitter, Diaspora, or Mastodon.


Posted on Techdirt - 11 November 2020 @ 3:26am

Surprise: Latest Draft Of The EU's Next Big Privacy Law Includes Some Improvements

from the expect-massive-lobbying-push-to-remove-them dept

The EU's new ePrivacy regulation is a strange beast. It's important, designed to complement the EU's GDPR. Where the GDPR is concerned with personal data "at rest" -- how it is stored and processed -- the ePrivacy Regulation can be thought of as dealing with personal data in motion. Despite that importance, it is largely unknown, except to people working in this area. That low profile is particularly strange given the fierce fighting that is taking place over what exactly it should allow or forbid. Businesses naturally want as much freedom as possible to use personal data as they wish, while privacy activists want the new regulation to strengthen the protection already provided by the GDPR.

A new draft version of the ePrivacy regulation has appeared from the Presidency of the EU Council, currently held by Germany. It is a nearly illegible mess of deletions and additions, but it contains some welcome improvements from the previous version (pdf), which was released in March 2020. One relates to the protection of the "end-users' terminal equipment" -- a legalistic way of saying the device used by the user. The DataGuidance site summarizes what's new here as follows:

in relation to the protection of end-users' terminal equipment information, the current Draft ePrivacy Regulation has introduced, in Article 8(1)(c), a more strict wording, providing that, in order for the use of the terminal equipment to be necessary for the provision of a service requested by the end-user, the same must be 'strictly technically necessary' for providing an information society service 'specifically' requested by the end-user. In addition, the current Draft ePrivacy Regulation has reintroduced Article 8(1)(da) and (e), addressing the use of processing and storage capabilities of terminal equipment and the collection of information from end-users' terminal equipment that are necessary for security purposes and for software update.

But the most significant change from the previous version concerns the controversial issue of "legitimate interests". This was perhaps the biggest loophole in the previous draft, since it allowed companies to collect personal information from their users if:

it is necessary for the purpose of the legitimate interests pursued by a service provider to use processing and storage capabilities of terminal equipment or to collect information from an end-user's terminal equipment, except when such interest is overridden by the interests or fundamental rights and freedoms of the end-user.

The concept of "legitimate interests" was so vague that it essentially allowed companies to do pretty much whatever they wanted with sensitive personal information they gathered. The latest draft from the German Presidency deletes this section completely. That's good news for users of online services, but predictably, telecoms companies are unhappy. In a letter sent to the EU, seen by Euractiv, they write:

We are finding that the latest text has taken a dramatic step back, disregarding the constructive compromises achieved so far, negating the positions and interests of many EU Member States and threatening the stability of the digital economy and its growth potential

Clearly, then, there is going to be yet another big fight over this latest move, as lobbyists try to get the "legitimate interests" section re-instated. The ePrivacy saga continues.

Follow me @glynmoody on Twitter, Diaspora, or Mastodon.


Posted on Techdirt - 4 November 2020 @ 8:21pm

To Prevent Free, Frictionless Access To Human Knowledge, Publishers Want Librarians To Be Afraid, Very Afraid

from the because-security dept

After many years of fierce resistance to open access, academic publishers have largely embraced -- and extended -- the idea, ensuring that their 35-40% profit margins live on. In the light of this subversion of the original hopes for open access, people have come up with other ways to provide free and frictionless access to knowledge -- most of which is paid for by taxpayers around the world. One is preprints, which are increasingly used by researchers to disseminate their results widely, without needing to worry about payment or gatekeepers. The other is through sites that have taken it upon themselves to offer immediate access to large numbers of academic papers -- so-called "shadow libraries". The most famous of these sites is Sci-Hub, created by Alexandra Elbakyan. At the time of writing, Sci-Hub claims to hold 85 million papers.

Even academics with access to publications through their institutional subscriptions often prefer to use Sci-Hub, because it is so much simpler and quicker. In this respect, Sci-Hub stands as a constant reproach to academic publishers, emphasizing that their products aren't very good in terms of serving libraries, which are paying expensive subscriptions for access. Not surprisingly, then, Sci-Hub has become Enemy No. 1 for academic publishers in general, and the leading company Elsevier in particular. The German site Netzpolitik has spotted the latest approach being taken by publishers to tackle this inconvenient and hugely successful rival, and other shadow libraries. At its heart lies the Scholarly Networks Security Initiative (SNSI), which was founded by Elsevier and other large publishers earlier this year. Netzpolitik explains that the idea is to track and analyze every access to libraries, because "security":

Elsevier is campaigning for libraries to be upgraded with security technology. In a SNSI webinar entitled "Cybersecurity Landscape -- Protecting the Scholarly Infrastructure", hosted by two high-ranking Elsevier managers, one speaker recommended that publishers develop their own proxy or a proxy plug-in for libraries to access more (usage) data ("develop or subsidize a low cost proxy or a plug-in to existing proxies").

With the help of an "analysis engine", not only could the location of access be better narrowed down, but biometric data (e.g. typing speed) or conspicuous usage patterns (e.g. a pharmacy student suddenly interested in astrophysics) could also be recorded. Any doubts that this software could also be used -- if not primarily -- against shadow libraries were dispelled by the next speaker. An ex-FBI analyst and IT security consultant spoke about the security risks associated with the use of Sci-Hub.

Since academic publishers can't compete against Sci-Hub on ease of use or convenience, they are trying the old "security risk" angle -- also used by traditional software companies against open source in the early days. Yes, they say, Sci-Hub/open source may seem free and better, but think of the terrible security risks… An FAQ on the main SNSI site provides an "explanation" of why Sci-Hub is supposedly a security risk:

Sci-Hub may fall into the category of state-sponsored actors. It hosts stolen research papers which have been harvested from publisher platforms often using stolen user credentials. According to the Washington Post, the US Justice Department is currently investigating the founder of Sci-Hub, Alexandra Elbakayan, for links between her and Russian Intelligence. If there is substance to this investigation, then using Sci-Hub to access research papers could have much wider ramifications than just getting access to content that sits behind a paywall.

As Techdirt pointed out when that Washington Post article came out, there is no evidence of any connections between Elbakyan and Russian Intelligence. Indeed, it's hard not to see the investigation as simply the result of whining academic publishers making the same baseless accusation, and demanding that something be "done". An article in Research Information provides more details about what those "wider ramifications than just getting access to content that sits behind a paywall" might be:

In the specific case of Sci-Hub, academic content (journal articles and books) is illegally harvested using a variety of methods, such as abusing legitimate log in credentials to access the secure computer networks of major universities and by hijacking "proxy" credentials of legitimate users that facilitate off campus remote access to university computer systems and databases. These actions result in a front door being opened up into universities' networks through which Sci-Hub, and potentially others, can gain access to other valuable institutional databases such as personnel and medical records, patent information, and grant details.

But that's not how things work in this context. The credentials of legitimate users that Sci-Hub draws on -- often gladly "lent" by academics who believe papers should be made widely available -- provide access purely to the articles held on the system. They do not open up "other valuable institutional databases" -- and certainly not sensitive information such as "personnel and medical records" -- unless those systems are designed by complete idiots. That is pure scaremongering, while this further claim is just ridiculous:

Such activities threaten the scholarly communications ecosystem and the integrity of the academic record. Sci-Hub has no incentive to ensure the accuracy of the research articles being accessed, no incentive to ensure research meets ethical standards, and no incentive to retract or correct if issues arise.

Sci-Hub simply provides free, frictionless access for everyone to existing articles from academic publishers. The articles are still as accurate and ethical as they were when they first appeared. To accuse Sci-Hub of "threatening" the scholarly communications ecosystem by providing universal access is absurd. It's also revealing of the traditional publishers' attitude to the uncontrolled dissemination of publicly-funded human knowledge, which is what they really fear and are attacking with the new SNSI campaign.

Follow me @glynmoody on Twitter, Diaspora, or Mastodon.

