The FBI's use of National Security Agency (NSA) data has been contentious ever since the details of mass surveillance were revealed by its contractor Edward Snowden in 2013. The Foreign Intelligence Surveillance Court has now ruled that FBI officials "improperly" searched through NSA records. This ruling casts a shadow over how American intelligence agencies collect surveillance data, which includes both upstream and downstream data collected in the past under Section 702 of the Foreign Intelligence Surveillance Act. The court found thousands of such searches conducted between 2017 and 2018. Improper searches are in direct violation of the Fourth Amendment of the US Constitution.
Joseph Gordon-Levitt has some strong thoughts about YouTube and Instagram. The actor and director said Wednesday that the platforms are a "net negative" for human creativity. "This business model is bad for people's creativity, especially young people," Gordon-Levitt said at TechCrunch Disrupt in San Francisco. "If you're setting out to make a short film, for example, and you already have on your mind, 'What's going to get me the most likes, followers, subscribers, etc.,' that's not the creative process that's going to make you the most happy as a creative person." Gordon-Levitt is the founder of HitRecord, an online community where people collaborate on creative projects ranging from film to music to writing. While plenty of beautiful content and communities exist on YouTube and Instagram, Gordon-Levitt said, he has an issue with business models that offer "free" services in exchange for "the right to conduct mass surveillance" and apply machine-learning algorithms to massive data sets for the benefit of third-party advertisers.
Bloomberg today reported that Palantir Technologies, a Peter Thiel-founded company that builds mass-surveillance solutions for law enforcement agencies, will delay its highly anticipated IPO indefinitely. Thiel also reportedly sent a memo to employees indicating they shouldn't expect the company to IPO within "the next three years." That doesn't necessarily mean it will IPO then, just that whatever business plan the company's executives are working off now has changed from what appeared to be a sure bet for at least a $20 billion initial offering to… holding off for now. There's certainly plenty to speculate about, however. For those unfamiliar with Palantir Technologies, it's a company that purports to make software that can "predict" crime. It's also one of the companies directly supporting the use of unconstitutional mass-surveillance techniques by ICE and other law enforcement agencies in the US. Despite the fact that Trump's administration has paid the company about $1.5 billion so far, it's never turned a profit.
This is the fashion world satirized by Zoolander and Sacha Baron Cohen's character Brüno, where wearability and even sartorial attractiveness are deprioritized in favor of crazy designs and artistic boundary pushing. Kate Rose's clothing line Adversarial Fashion is as confusing as anything you'll find on any catwalk in the world. They were discussing the rise of automatic number-plate recognition technology, which has been widely adopted in the U.S. at city, county, state and federal levels. The American psychologist Abraham Maslow once pointed out that to a person who has only a hammer, there's a tendency for everything to look like a nail. As a person drives around and is captured by multiple cameras, automatic number-plate recognition systems provide a constantly updated means of monitoring their approximate (or even specific) location. "It was pulling in data from misclassified things like billboards and picket fences."
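As a loose illustration of why such systems pull in junk, consider a simplified plate-format filter. This is a hypothetical sketch, not how any real ALPR product works — production pipelines use per-state plate templates and OCR confidence scores — but any sufficiently loose pattern will accept text scraped from billboards and signs:

```python
import re

# Hypothetical, simplified filter: many US plates loosely resemble
# 5-8 uppercase letters and digits. This single regex is an assumption
# for illustration only.
PLATE_PATTERN = re.compile(r"^[A-Z0-9]{5,8}$")

def looks_like_plate(ocr_text: str) -> bool:
    """Return True if an OCR read superficially resembles a plate."""
    return bool(PLATE_PATTERN.match(ocr_text.strip().upper()))

# OCR reads from one camera frame: a real plate, plus text scraped
# from a billboard and a picket-fence sign.
reads = ["7ABC123", "SALE50", "OPEN", "NO TRESPASSING"]
accepted = [r for r in reads if looks_like_plate(r)]
print(accepted)  # ['7ABC123', 'SALE50'] -- billboard text slips through
```

The billboard fragment "SALE50" passes the filter just as easily as a genuine plate, which is exactly the kind of misclassified data the quote describes.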
A recent investigation, conducted by Mijente, revealed the US government has at least 29 active contracts with the company worth approximately $1.5 billion. This covers Palantir's work with all four branches of the military, Border Patrol, ICE, and Homeland Security, but it doesn't even touch on the hundreds of contracts it has with local law enforcement throughout the country. All in all, the company is valued at over $20 billion. Peter Thiel, the tech billionaire responsible for PayPal, a current Facebook board member, and the co-founder of Palantir, is a vocal conservative and prominent Trump supporter. He was the President's seventh-largest campaign donor in 2016 and remains a staunch ally. But it wasn't President Trump who brought Palantir into the country's fold; that distinction belongs to President Barack Obama.
Ex-NSA contractor and whistleblower Edward Snowden's memoir, Permanent Record, is to be published on 17 September – Constitution Day in the US. The book claims to lay bare Snowden's role in helping to create the National Security Agency's system of mass surveillance and how he exposed it to public scrutiny. Snowden remains in Russia but there's a chance publication could result in an updated version of what older readers might remember as the Spycatcher affair – when the British government attempted to stop local publication of Peter Wright's book about his time in MI5. The resulting furore saw thousands of copies imported from outside the UK and massively boosted sales for the book. To be fair to Snowden, we'd expect his book to be a great deal more interesting than Spycatcher. Moscow resident Snowden is presumably still officially bound by a non-disclosure agreement with his previous employer – we're guessing he hasn't asked the NSA to review the book prior to publication.
The flagrant misuse of facial recognition tech by the police is finally being criticised by MPs, with calls to put a stop to it until there are proper regulations in place – which seems like it should've been a no-brainer. The House of Commons Science and Technology Committee cited concerns over bias and accuracy, and also highlighted the fact that a database of custody images held by the fuzz hasn't been properly audited to remove images of people who were never convicted. The legality of facial recognition trials has also been called into question, with the report stating that "there is growing evidence from respected, independent bodies that the ‘regulatory lacuna’ surrounding the use of automatic facial recognition has called the legal basis of the trials into question. The Government, however, seems to not realise or to concede that there is a problem." Such trials include unmarked police vehicles testing mass surveillance/facial recognition technology on an unwitting public, and other shady-as-shit practices like fining passers-by who don't want to be scanned by inferring guilt from their non-compliance. US lawmakers introduced legislation earlier this year aimed at blocking companies using face recognition tech from collecting and sharing people's data without their consent.
Reports have emerged, via the Intercept, suggesting two of the US' most influential and powerful technology giants have indirectly been assisting the Chinese Government with its campaign of mass surveillance and censorship. Both will try to distance themselves from the controversy, but this could have a significant impact on both firms. The drama here is focused around a joint venture, the OpenPower Foundation, founded in 2013 by Google and IBM, which features members such as Red Hat, Broadcom, Mellanox, Xilinx and Rackspace. On the surface, Semptian is a relatively ordinary Chinese semiconductor business, but when you look at its most profitable division, iNext, the story becomes a lot more sinister. iNext works with Chinese Government agencies by providing a product called Aegis. We can imagine the only people who are pleased at this news are the politicians who are looking to get their faces on TV by theatrically condemning the whole saga.
Such transfers are broadly used for everything from email to a range of basic online services, in cases where servers may be based outside the European Union. The case, brought by Austrian privacy lawyer Max Schrems, is a continuation of a legal battle with Facebook that in 2015 saw the European Court of Justice (ECJ) strike down an agreement known as Safe Harbour, which had been used since the turn of the millennium to transfer EU data to the US. In the wake of Edward Snowden's disclosures of US mass surveillance – of which Facebook was revealed to be a specific target – the ECJ found that Safe Harbour was invalid, since it was exposing EU citizens' personal information to mass data collection by the US' National Security Agency (NSA). In the wake of that decision, the US and the EU reached another arrangement known as the Privacy Shield, which includes additional privacy protections for EU citizens and allows them to make complaints in the US. While Privacy Shield operates specifically between the US and the EU, standard contractual clauses are much broader, covering transfers from the EU to any other jurisdiction around the world. That is exactly the problem being addressed in the case currently before the ECJ: campaigners argue that nothing essential has changed, and that Facebook is continuing to expose EU citizens' data to a surveillance risk by transferring it to the US.
Border guards in the Chinese region of Xinjiang routinely install a secret spy app on the Android smartphones belonging to tourists, it has been reported. The border guards reportedly take tourists' phones and install the spy app, which can steal emails, texts and contacts, along with information about the handset itself. China is known to have some of the most oppressive surveillance laws in the world, but it is worth noting that it is not the only country to inspect people's phones when crossing the border. US customs officials, for example, can demand that travellers to the US unlock their mobile devices for inspection. But now an investigation carried out by The Guardian newspaper, along with The New York Times and Süddeutsche Zeitung, found that the Chinese are actively targeting travellers entering the Chinese region of Xinjiang from Kyrgyzstan. Xinjiang has a large local Muslim population, and the Chinese government has curbed freedoms in the province with the installation of facial recognition cameras on streets and in mosques, and has reportedly forced residents to download software that searches their phones.
Authorities in a tumultuous region of China are ordering tourists and other visitors to install spyware on their smartphones, it is claimed. The New York Times reported today that guards working the border with Kyrgyzstan, in China's Xinjiang region, have insisted visitors put an app called Fengcai on their Android devices – and that includes tourists, journalists, and other foreigners. The Android app is said to harvest details from the handset ranging from text messages and call records to contacts and calendar entries. It also apparently checks to see if the device contains any of 73,000 proscribed documents, including missives from terrorist groups such as ISIS recruitment fliers and bomb-making instructions. China being China, it also looks for information on the Dalai Lama and – bizarrely – mentions of a Japanese grindcore band. Visitors using iPhones had their mobes connected to a different, hardware-based device that is believed to install similar spyware.
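One plausible way to picture such a document check is a hash-matching scan: compute a fingerprint of each file on the device and compare it against a blocklist of 73,000 known fingerprints. This mechanism is an assumption for illustration — the reporting doesn't detail how Fengcai compares files — and the banned-hash set below is made up (it contains only the MD5 of the string "hello"):

```python
import hashlib
from pathlib import Path

# Hypothetical blocklist of file fingerprints. A real system would ship
# tens of thousands of entries; this one holds a single made-up hash
# (the MD5 of b"hello") purely for demonstration.
BANNED_HASHES = {
    "5d41402abc4b2a76b9719d911017c592",
}

def md5_of(path: Path) -> str:
    """Hash a file incrementally so large files don't load into memory."""
    h = hashlib.md5()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def scan(root: Path) -> list[Path]:
    """Return every file under `root` whose MD5 is on the banned list."""
    return [p for p in root.rglob("*")
            if p.is_file() and md5_of(p) in BANNED_HASHES]
```

Hash matching only catches byte-identical copies of a known file; renaming a file doesn't evade it, but changing a single byte does, which is why such blocklists have to be large and constantly updated.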
Last month, I was at Detroit's Metro Airport for a connecting flight to Southeast Asia. Presumably, most of my fellow fliers did not know the face scanners were avoidable: I didn't hear a single announcement alerting passengers how to avoid them. With our faces becoming yet another form of data to be collected, stored, and used, it seems we're sleep-walking toward a hyper-surveilled environment, mollified by assurances that the process is undertaken in the name of security and convenience. The facial recognition plan in US airports is built around the Customs and Border Protection (CBP) Biometric Exit Program, which utilizes face scanning technology to verify a traveler's identity. The opportunity for this kind of biometric collection infrastructure to feed into a broader system of mass surveillance is staggering, as is its ability to erode privacy. Research shows that it is particularly unreliable for gender and racial minorities: one study, for example, found a 99 percent accuracy rate for white men, while the error rate for women with darker skin reached up to 35 percent.
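The size of that disparity is easy to miss in prose, so here is a toy calculation. The counts are made-up stand-ins chosen to reproduce the cited rates (99 percent accuracy versus up to 35 percent error) — they are not the study's actual data:

```python
# Illustrative counts only: per-group classification results chosen to
# match the rates cited above, not taken from the study itself.
results = {
    # group: (correct classifications, total faces tested)
    "lighter-skinned men": (990, 1000),
    "darker-skinned women": (650, 1000),
}

def error_rate(correct: int, total: int) -> float:
    """Fraction of test faces the system got wrong."""
    return 1 - correct / total

for group, (correct, total) in results.items():
    print(f"{group}: {error_rate(correct, total):.1%} error rate")
# lighter-skinned men: 1.0% error rate
# darker-skinned women: 35.0% error rate
```

A 35x gap in error rates means the downstream consequences of a false match — secondary screening, a database flag — fall overwhelmingly on one group.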
Edward Snowden, the notorious American whistleblower, has confirmed he relied on the censorship resistance of Bitcoin BTC to help leak data to journalists in 2013. Snowden made headlines across the globe after leaking highly classified information from the National Security Agency (NSA) when he was a CIA employee and subcontractor. The leak revealed several global surveillance programs, many operated by the NSA and the Five Eyes Intelligence Alliance with the help of telecommunication companies and European governments, prompting a global discussion about national security and individual privacy. Speaking via video conference at Bitcoin 2019, an industry conference held in San Francisco over the past couple of days, Snowden said: "The servers that I used to transfer this information to journalists were paid for using Bitcoin." During his talk, Snowden praised Bitcoin's decentralized and permissionless nature and the freedom it gives users to exchange and transact without supervision, particularly in a world driven by heightened surveillance facilitated by the emergence of technology. "Bitcoin is free money […] you are able to exchange and interact permissionless."
A group of policy experts assembled by the EU has recommended that it ban the use of AI for mass surveillance and mass "scoring of individuals," a practice that potentially involves collecting varied data about citizens — everything from criminal records to their behavior on social media — and then using it to assess their moral or ethical integrity. The recommendations are part of the EU's ongoing efforts to establish itself as a leader in so-called "ethical AI." Earlier this year, it released its first guidelines on the topic, stating that AI in the EU should be deployed in a trustworthy and "human-centric" manner. The new report offers more specific recommendations. These include identifying areas of AI research that require funding; encouraging the EU to incorporate AI training into schools and universities; and suggesting new methods to monitor the impact of AI. However, the paper is only a set of recommendations at this point, and not a blueprint for legislation.
These were two of the investment and policy recommendations a group of more than 50 AI experts from across the European Union offered today. The potential for AI systems to harm humans may require governments to "provide appropriate safeguards to protect individuals and society," the authors warned. Altogether, the independent investment and policy recommendations report includes 33 recommendations aimed at making Europe competitive on the global AI stage while guiding the creation of trustworthy, sustainable AI systems. The European Commission formed the AI High Level Expert Group (HLEG) in June 2018, and today's report follows the April release of AI ethics guidelines. The report takes a closer look at what some European leaders have referred to as a third way, a path different from approaches being taken in the United States, where privacy concerns abound, and China, where facial recognition use has been called dystopian and earned international condemnation. "Europe can distinguish itself from others by developing, deploying, using, and scaling Trustworthy AI, which we believe should become the only kind of AI in Europe, in a manner that can enhance both individual and societal well-being," the document reads.
An independent expert group tasked with advising the European Commission to inform its regulatory response to artificial intelligence — to underpin EU lawmakers' stated aim of ensuring AI developments are "human centric" — has published its policy and investment recommendations. This follows earlier ethics guidelines for "trustworthy AI", put out by the High Level Expert Group (HLEG) for AI back in April, when the Commission also called for participants to test the draft rules. The document includes warnings on the use of AI for mass surveillance and scoring of EU citizens, such as China's social credit system, with the group calling for an outright ban on "AI-enabled mass scale scoring of individuals". It also urges governments to commit to not engage in blanket surveillance of populations for national security purposes. (So perhaps it's just as well the UK has voted to leave the EU, given the swingeing state surveillance powers it passed into law at the end of 2016.) "Governments should commit not to engage in mass surveillance of individuals and to deploy and procure only Trustworthy AI systems, designed to be respectful of the law and fundamental rights, aligned with ethical principles and socio-technically robust."
Alphabet's controversial smart city project just took its biggest step forward. The project envisions the city of the future, with buildings made of environmentally sustainable timber and flexible, movable wall panels. Automatic awnings would protect people from downpours and sensors would measure how public facilities are being used. But Sidewalk Labs, first announced in 2015, has also faced blowback as privacy advocates worry about data collection and the potential of mass surveillance. The main development for the project would be in two neighborhoods called Quayside (pronounced "key-side") and Villiers West. Sidewalk on Monday said it's making a $900 million equity investment with local partners in real estate and advanced systems in Quayside and Villiers West.
A legal challenge to a data transfer mechanism that's used by thousands of companies to authorize taking European citizens' personal data to the US for processing has been delayed. As we reported last month, the General Court of the EU had set a date of July 1 and 2 to hear the complaint brought by French digital rights group, La Quadrature du Net, against the European Commission's renegotiated data transfer agreement, the EU-US Privacy Shield. La Quadrature du Net has argued for years that Privacy Shield is incompatible with EU law as a result of US government mass surveillance practices — filing its first complaint back in October 2016. Nor is it alone in its concerns, with the European Parliament, European data protection agencies, and privacy and data protection experts all raising questions about the legality of the arrangement, which went into operation in August 2016. But in a series of tweets posted to Twitter today, the digital rights group says it has been informed by the court that the hearing has been cancelled — in favor of waiting for the upshot of a separate hearing on July 9. Irish judges have asked Europe's top court to weigh in on a number of legal questions in that case — including whether Privacy Shield ensures an adequate level of protection for EU citizens' personal data, as EU law requires that it must.
They will have to evaluate how to make sure platforms play fair — and ensure that the initial embrace of sellers or service providers doesn't evolve into crushing abuse. Then there's hate speech and online disinformation. What's to be done to shrink the democratic risks of political manipulation without trampling freedom of expression? This is not a theoretical threat; the predecessor arrangement that had stood for fifteen years was invalidated in 2015, after a legal challenge which drew on NSA whistleblower Edward Snowden's revelations of US mass surveillance programs. And, as the Internet splinters into increasingly localized flavors, how will Europe prepare and position itself? Either way, battles are brewing.
A legal challenge to the UK's controversial mass surveillance regime has revealed shocking failures by the main state intelligence agency, which has broad powers to hack computers and phones and intercept digital communications, in handling people's information. The challenge, by rights group Liberty, led last month to an initial finding that MI5 had systematically breached safeguards in the UK's Investigatory Powers Act (IPA) — breaches the Home Secretary, Sajid Javid, euphemistically couched as "compliance risks" in a carefully worded written statement that was quietly released to parliament. Today Liberty has put more meat on the bones of the finding of serious legal breaches in how MI5 handles personal data, culled from newly released (but redacted) documents that it says describe the "undoubtedly unlawful" conduct of the UK's main security service, which has been retaining innocent people's data for years. The controversial surveillance legislation passed into UK law in November 2016 — enshrining a system of mass surveillance of digital communications which includes a provision that logs of all Internet users' browsing activity be retained for a full year, accessible to a wide range of government agencies (not just law enforcement and/or spy agencies). The law also allows the intelligence agencies to maintain large databases of personal information on UK citizens, even if they are not under suspicion of any crime. The newly released court documents include damning comments on MI5's handling of data by the IPCO (the Investigatory Powers Commissioner's Office) — which writes that: "Without seeking to be emotive, I consider that MI5's use of warranted data… is currently, in effect, in 'special measures' and the historical lack of compliance… is of such gravity that IPCO will need to be satisfied to a greater degree than usual that it is 'fit for purpose'."