Tag: Privacy

Kill it With Fire

In response to the anti-trust lawsuit filed against it, Google will no longer give favorable placement to media outlets that use its AMP HTML dialect.

This is a good thing.

First, AMP sucks; second, it was an invitation for Google to violate user privacy and extend its ad and search monopolies; and third, AMP sucks:

Four years after offering special placement in a “top stories carousel” in search results to entice publishers to use a format it created for mobile pages, called AMP, Google announced last week that it will end that preferential treatment in the spring.

“We will prioritize pages with great page experience, whether implemented using AMP or any other web technology, as we rank the results,” Google said in a blog post.

The company had indicated in 2018 that it would drop the preference eventually. Last week’s announcement of a concrete timeline comes less than a month after the Department of Justice called Google a “monopoly gatekeeper to the internet” in a lawsuit alleging antitrust violations and as pressure mounts on officials in the European Union, which has already fined Google more than $9 billion for antitrust violations.

“I did always think AMP posed antitrust concerns,” said Sally Hubbard, author of the book “Monopolies Suck” and an antitrust expert with the Open Markets Institute. “It’s, ‘If you want to show up on the top of the search results, you have to play by our rules, you have to use AMP.’ ”

………

Whatever prompted the timing of the change, some news sites are relieved that they won’t have to keep using Google’s preferred mobile standard.

“We are encouraged to see Google beginning to outline a path away from AMP,” Robin Berjon, head of data governance at The New York Times, said in a written statement in response to questions from The Markup. “It’s important Google addresses the core challenge with the format, so that it is no longer a requirement for news products and performance ranking.”

News publishers and others have been griping about AMP for years. Some called it Google’s attempt to exert the same kind of control over the larger web that Facebook exerts over posts in its closed system.

That’s because AMP is more than just a set of formatting rules. Once a website sets up an AMP page, Google copies it and stores it on Google servers. When users click on the link for an AMP page in search results—or its news reading app—Google serves up that cached version from its servers.

“AMP keeps users within Google’s domain and diverts traffic away from other websites for the benefit of Google,” read a 2018 open letter signed by more than 700 technologists and advocates. “At a scale of billions of users, this has the effect of further reinforcing Google’s dominance of the Web.”

………

In an analysis published by The Markup earlier this year of 15,269 popular searches on Google, we found that AMP-enabled results appeared often, taking up more than 13 percent of the first results page. Google took another 41 percent of the page for its own products.

………

As the news industry struggled over the past decade, with dropping newspaper subscription rates and ad revenue and plateauing online traffic leading to massive job losses, many publishers adopted AMP in hopes that it would help their bottom lines. Most of the roughly 2,000 members of the News Media Alliance, a trade organization that represents newspapers, use it.

“They don’t really feel there is a choice,” said Danielle Coffey, the group’s general counsel and senior vice president.

Her opinion is widely shared.

“We essentially have a coercion by Google upon publishers to allow people to host their content,” said Andrew Betts, a former member of the Technical Architecture Group at the international web standards organization W3C, who has written about his concerns with AMP. “And publishers who decide they don’t want that to happen because they want to serve their own content, thanks very much, will not ever appear in the first set of search results.”

………

And AMP sometimes causes issues that publishers lack the power to fix on their own. In one prominent example, publishers discovered there was no way to allow users to opt out of having their data sold, a requirement under the California Consumer Privacy Act, which went into effect this year.

Talk about burying the lede.

AMP allows Google to take control of user data from media outlets.

Now we know why Google pushed it so hard: they wanted to slurp up more user data.
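To make the caching mechanism concrete: when you tap an AMP result, the link does not even point at the publisher, it points at Google’s cache. Below is a minimal Python sketch of how a publisher URL maps onto a cdn.ampproject.org cache URL. It assumes the simple case where the cache subdomain is just the publisher’s hostname with dots turned into dashes; the real scheme also handles pre-existing dashes, internationalized domains, and other edge cases, so treat this as an illustration rather than a complete implementation.

from urllib.parse import urlparse

def amp_cache_url(page_url: str) -> str:
    """Rough illustration of the Google AMP cache URL pattern.
    Only the simple dots-to-dashes case is handled here."""
    parts = urlparse(page_url)
    # Dots in the publisher's hostname become dashes in the cache subdomain.
    cache_subdomain = parts.hostname.replace(".", "-")
    # "/c/" marks cached content and "s/" marks an HTTPS origin.
    return f"https://{cache_subdomain}.cdn.ampproject.org/c/s/{parts.hostname}{parts.path}"

print(amp_cache_url("https://www.example.com/news/story.amp.html"))
# https://www-example-com.cdn.ampproject.org/c/s/www.example.com/news/story.amp.html

The reader never leaves Google’s domain, which is exactly the complaint in the open letter quoted above.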

This Apple Screw Up Points to a Bigger Picture

Apple rolled out a new operating system for its Macintosh computers, Big Sur, and the rollout slowed down every Mac with an internet connection, whether or not it was running, or was even capable of running, the upgrade:

Mac users today began experiencing unexpected issues that included apps taking minutes to launch, stuttering and non-responsiveness throughout macOS, and other problems. The issues seemed to begin close to the time when Apple began rolling out the new version of macOS, Big Sur—but it affected users of other versions of macOS, like Catalina and Mojave.

Other Apple services faced slowdowns, outages, and odd behavior, too, including Apple Pay, Messages, and even Apple TV devices.

It didn’t take long for some Mac users to note that trustd—a macOS process responsible for checking with Apple’s servers to confirm that an app is notarized—was attempting to contact a host named ocsp.apple.com but failing repeatedly. This resulted in systemwide slowdowns as apps attempted to launch, among other things.

The big picture here is not that Apple screwed up. The big picture, as Jeffrey Paul notes, is that your computer no longer belongs to you. It is under the direct control of a corporation that may or may not have your best interests at heart:

It’s here. It happened. Did you notice?

I’m speaking, of course, of the world that Richard Stallman predicted in 1997. The one Cory Doctorow also warned us about.

On modern versions of macOS, you simply can’t power on your computer, launch a text editor or eBook reader, and write or read, without a log of your activity being transmitted and stored.

It turns out that in the current version of the macOS, the OS sends to Apple a hash (unique identifier) of each and every program you run, when you run it. Lots of people didn’t realize this, because it’s silent and invisible and it fails instantly and gracefully when you’re offline, but today the server got really slow and it didn’t hit the fail-fast code path, and everyone’s apps failed to open if they were connected to the internet.

………

This means that Apple knows when you’re at home. When you’re at work. What apps you open there, and how often. They know when you open Premiere over at a friend’s house on their Wi-Fi, and they know when you open Tor Browser in a hotel on a trip to another city.

“Who cares?” I hear you asking.

Well, it’s not just Apple. This information doesn’t stay with them:

  1. These OCSP requests are transmitted unencrypted. Everyone who can see the network can see these, including your ISP and anyone who has tapped their cables.

  2. These requests go to a third-party CDN run by another company, Akamai.

  3. Since October of 2012, Apple is a partner in the US military intelligence community’s PRISM spying program, which grants the US federal police and military unfettered access to this data without a warrant, any time they ask for it. In the first half of 2019 they did this over 18,000 times, and another 17,500+ times in the second half of 2019.

This data amounts to a tremendous trove of data about your life and habits, and allows someone possessing all of it to identify your movement and activity patterns. For some people, this can even pose a physical danger to them.

Big Brother is here, and he’s inside the house.
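For a sense of what a “hash (unique identifier) of each and every program” looks like, here is a small Python sketch that fingerprints the applications on a Mac locally. This is only an illustration of the kind of identifier involved, computed and kept on your own machine; it is not the exact payload macOS transmits, and the details of Apple’s trustd/OCSP checks are Apple’s own.

import hashlib
from pathlib import Path

def app_fingerprints(applications_dir: str = "/Applications") -> dict:
    """Compute a SHA-256 digest of each app's main executable.
    A purely local illustration of a per-program 'unique identifier';
    Apple's notarization checks differ in their details."""
    digests = {}
    for app in Path(applications_dir).glob("*.app"):
        exe = app / "Contents" / "MacOS" / app.stem
        if exe.is_file():
            digests[app.name] = hashlib.sha256(exe.read_bytes()).hexdigest()
    return digests

if __name__ == "__main__":
    for name, digest in sorted(app_fingerprints().items()):
        print(digest[:16], name)

A log of identifiers like these, stamped with times and IP addresses, is the “tremendous trove of data about your life and habits” the quote is talking about.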

So Not a Surprise

Is anyone surprised that U.S. tech firms are refusing to obey EU data transfer regulations?

Criminality is an integral part of the Silicon Valley ethos. That’s what “Move Fast and Break Things” means.

Technology firms’ compliance with European restrictions on transatlantic data transfers is shockingly poor, Austrian privacy campaigner Max Schrems said on Monday, publishing a survey here of companies including Facebook and Netflix.

The Court of Justice of the European Union (CJEU) ruled in July that the data arrangement set up in 2016, called Privacy Shield, was invalid under Europe’s privacy framework because of concerns about U.S. surveillance.

………

Exercising the right of customers to ask companies how their data is handled under the EU’s General Data Protection Regulation (GDPR), the survey drew a mixed bag of responses – some firms did not respond and others gave misleading answers.

………

“Overall, we were astonished by how many companies were unable to provide little more than a boilerplate answer,” said Schrems.

“The companies that did provide answers largely are simply not complying with the CJEU judgment. It seems that most of the industry still does not have a plan as to how to move forward.”

This IS their plan for moving forward:  Break the law and force the EU to go after them.

Until assets are seized, or executives are arrested, the lawbreaking will continue.

F%$# Zuck

In yet another case of wrongdoing, which they claim was a bug, Facebook has been caught spying on Instagram users through their phone cameras.

Given that each time this happens it further reinforces Facebook’s model of stalker capitalism, I am not inclined to believe that this was an accident, and I am not inclined to accept their routine (and insincere) apology:

Facebook Inc. is again being sued for allegedly spying on Instagram users, this time through the unauthorized use of their mobile phone cameras.

The lawsuit springs from media reports in July that the photo-sharing app appeared to be accessing iPhone cameras even when they weren’t actively being used.

Facebook denied the reports and blamed a bug, which it said it was correcting, for triggering what it described as false notifications that Instagram was accessing iPhone cameras.

In the complaint filed Thursday in federal court in San Francisco, New Jersey Instagram user Brittany Conditi contends the app’s use of the camera is intentional and done for the purpose of collecting “lucrative and valuable data on its users that it would not otherwise have access to.”  

………

The case is Conditi v. Instagram, LLC, 20-cv-06534, U.S. District Court, Northern District of California (San Francisco).

Facebook is a criminal enterprise.

Would You Like Cheese to Go with Your Whine?

The FBI has issued a warning that doorbell cams may tip off residents that the police are planning a raid on their home.

Typical cops. More surveillance, until it touches on them:

The rise of the internet-connected home security camera has generally been a boon to police, as owners of these devices can (and frequently do) share footage with cops at the touch of a button. But according to a leaked FBI bulletin, law enforcement has discovered an ironic downside to ubiquitous privatized surveillance: The cameras are alerting residents when police show up to conduct searches.

A November 2019 “technical analysis bulletin” from the FBI provides an overview of “opportunities and challenges” for police from networked security systems like Amazon’s Ring and other “internet of things,” or IoT, devices. Marked unclassified but “law enforcement sensitive” and for official use only, the document was included as part of the BlueLeaks cache of material hacked from the websites of fusion centers and other law enforcement entities.

The “opportunities” described are largely what you’d expect: Sensor-packed smart devices create vast volumes of data that can be combed through by curious investigators, particularly “valuable data regarding device owners’ movements in real-time and on a historic basis, which can be used to, among other things, confirm or contradict subject alibis or statements.”

The downside for police, who have rushed to embrace Ring usage nationwide as the Amazon subsidiary aggressively marketed itself to and sealed partnerships with local departments, is that networked cameras record cops just as easily as the rest of us. Ring’s cameras are so popular in part because of how the company markets their ability to detect motion at your doorstep, providing convenient phone alerts of “suspicious activity,” however you might define it, even when you’re out of the house. But sometimes the police are the unannounced, unwanted visitor: “Subjects likely use IoT devices to hinder LE [law enforcement] investigations and possibly monitor LE activity,” the bulletin states. “If used during the execution of a search, potential subjects could learn of LE’s presence nearby, and LE personnel could have their images captured, thereby presenting a risk to their present and future safety.”

The document describes a 2017 incident in which FBI agents approached a New Orleans home to serve a search warrant and were caught on video. “Through the Wi-Fi doorbell system, the subject of the warrant remotely viewed the activity at his residence from another location and contacted his neighbor and landlord regarding the FBI’s presence there,” it states.

Sauce for the Gander.

Can You Say ……… Dystopian? Good — I Knew You Could

Google is looking at providing information services to employers to help them control their healthcare costs.

To put that into English, Google will collect enormous amounts of data about its clients’ employees in order to flag people who are engaging in “Unhealthy Lifestyles” and mitigate employer exposure to healthcare costs.

Basically, they will spy on employees, and provide information that employers can use to meddle in what their employees eat, when they sleep, etc.

And, though Google (Alphabet) will deny it, employers will use this data to fire employees who are flagged as healthcare cost risks.

If you want a picture of Google’s future, imagine a boot stamping on a human face—forever:*

Without much fanfare, Verily, Alphabet’s life sciences unit, has launched Coefficient Insurance. It was only a matter of time before Google’s parent got into the health insurance business — in fact, one wonders what took it so long. With Google’s intimate knowledge of our daily patterns, contacts and dreams, the search engine group has for years had a far better picture of risk than any insurer.

That Coefficient Insurance, which is also backed by Swiss Re, would initially focus on the relatively arcane area of stop-loss insurance to protect employers from staff health cost volatility should not obscure its ambitious agenda for the rest of the industry. Thus, according to Verily’s senior management, it might soon start monitoring at-risk employees via their smartphones and even coaching them towards healthier lifestyles.

………

As with many services out of Silicon Valley, there is not much reflection about the probable reconfigurations of power among social groups — the sick and the healthy, the insured and the uninsured, the employers and the employees — that are likely to occur once the digital dust settles.

One would need to be extremely naive to believe that a more extensive digital surveillance system — in the workplace and, with Alphabet running the show, now also at home, in the car and wherever your smartphone takes you — is likely to benefit the weak and the destitute. Some good might come out of it — a healthier workplace, maybe — but we should also inquire who would bear the cost of this digital utopia.

………

Privacy law does not offer an adequate solution either. Under pressure from employers, most workers acquiesce to being monitored. This was obvious even before Alphabet’s foray into insurance, as plenty of smaller players have been pitching employers sophisticated workplace surveillance systems as a way of lowering healthcare costs.

If this does not scare the hell out of you, you have not been paying attention.

*Apologies to George Orwell.

Don’t Throw Me in That Briar Patch

Facebook is completely losing its sh%$ because the next version of the iPhone operating system will require that users explicitly opt in to being spied on by advertisers.
They have actually issued an apology of sorts, which is just about as sincere as their apologies for spying on their users:

Facebook has apologized to its users and advertisers for being forced to respect people’s privacy in an upcoming update to Apple’s mobile operating system – and promised it will do its best to invade their privacy on other platforms.

The antisocial network that makes almost all of its revenue from building a vast, constantly updated database of netizens that it then sells access to, is upset that iOS 14, due out next month, will require apps to ask users for permission before Facebook grabs data from their phones.

“This is not a change we want to make, but unfortunately Apple’s updates to iOS14 have forced this decision,” the behemoth bemoans before thinking the unthinkable: that it may have to end its most intrusive analytics engine for iPhone and iPad users.
“We know this may severely impact publishers’ ability to monetize through Audience Network on iOS 14, and, despite our best efforts, may render Audience Network so ineffective on iOS 14 that it may not make sense to offer it on iOS 14 in the future.”

Amazingly, despite Facebook pointing out to Apple that it is tearing away people’s right to have their privacy invaded in order to receive ads for products they might want, Cupertino continues to push ahead anyway.

………

Facebook wants advertisers to know however that it has their back. It will continue to suck as much information as possible off every other device and through every other operating system.

………

Facebook closes out by promising that it will do all it can to prevent user privacy from being respected in future. “We believe that industry consultation is critical for changes to platform policies, as these updates have a far-reaching impact on the developer ecosystem,” it said. “We’re encouraged by conversations and efforts already taking place in the industry – including within the World Wide Web Consortium (W3C) and the recently announced Partnership for Responsible Addressable Media (PRAM). We look forward to continuing to engage with these industry groups to get this right for people and small businesses.”

What can I say but, “F%$# Zuck.”

Mandy Rice-Davies Applies*

Of course they are.  They are opposed to anything that would make it harder for them to monetize our privacy:

The trade group representing many of the largest technological security companies is urging regulators not to overreach on facial recognition restrictions even as U.S. lawmakers push to rein in police use of the software.

The Security Industry Association, which represents NEC Corp., France’s Idemia Group, Japan’s Ayonix Corp. and others, will release on Tuesday a 10-point framework urging policy-makers, companies and governments to embrace the benefits of the technology, while upholding certain ethical principles.

SIA is defending government use of facial recognition at a time when some civil rights advocates, companies, and lawmakers are calling for police departments to stop using the technology. Critics want better guardrails to ensure facial recognition doesn’t promote racial biases in the criminal justice system.

Calls to curb law enforcement’s use of the technology grew louder during widespread public outrage over racial inequities following the death of George Floyd, an unarmed Black man, in Minneapolis police custody in May.

SIA’s policy principles, obtained by Bloomberg News, caution lawmakers not to adopt a “one-size-fits-all legislative framework.”

Here is a quick rule of thumb: When businesses start proposing regulatory forbearance, suggesting that “One Size Fits All” legislation (mark your bullsh%$ bingo card) would be a bad thing, or offering “Policy Principles,” it means that they want business as usual to continue, typically by sucking the marrow out of the public space.

These folks want to make money by being evil, and they don’t care if they sell to corrupt and brutal cops in the USA, or Chinese authorities enforcing a genocide against the Uighurs. 

*Well, he would say that, wouldn’t he? Seriously, know your history.

Live in Obedient Fear, Citizen

It appears that multiple entities in the US State Security Apparatus tracked millions of people’s phones without a warrant despite Supreme Court decisions requiring one:

The Secret Service paid for a product that gives the agency access to location data generated by ordinary apps installed on peoples’ smartphones, an internal Secret Service document confirms.

The sale highlights the issue of law enforcement agencies buying information, and in particular location data, that they would ordinarily need a warrant or court order to obtain. This contract relates to the sale of Locate X, a product from a company called Babel Street.

In March, tech publication Protocol reported that multiple government agencies signed millions of dollars worth of deals with Babel Street after the company launched its Locate X product. Multiple sources told the site that Locate X tracks the location of devices anonymously, using data harvested by popular apps installed on peoples’ phones.

Protocol found public records showed that U.S. Customs and Border Protection (CBP) purchased Locate X. One former Babel Street employee told the publication that the Secret Service used the technology. Now, the document obtained by Motherboard corroborates that finding.

………

“As part of my investigation into the sale of Americans’ private data, my office has pressed Babel Street for answers about where their data comes from, who they sell it to, and whether they respect mobile device opt-outs. Not only has Babel Street refused to answer questions over email, they won’t even put an employee on the phone,” Senator Ron Wyden told Motherboard in a statement.

………

Government agencies are increasingly at the end of that location data chain. In February The Wall Street Journal reported that Immigration and Customs Enforcement (ICE) and other agencies bought an app-based location data product from a different firm called Venntel. Senator Wyden’s office then found the Internal Revenue Service (IRS) was also a Venntel customer.

Law enforcement agencies typically require a warrant or court order to compel a company to provide location data for an investigation. Many agencies have filed so-called reverse location warrants to ask Google to hand over information on what Android devices were in a particular area at a given time, for example. But an agency does not need to seek a warrant when it simply buys the data instead.

………

Senator Wyden is planning legislation that would block such purchases.

We should also forbid our intelligence agencies from having allies collect data that they are themselves forbidden to collect, and from performing the same collection for those allies in return.

The whole “Five Eyes” thing appears to be a way to allow intelligence agencies to spy on their own citizens by swapping who is looking at any given time.

Introducing the Mega-Morissette

Facebook is suing the EU, claiming that Brussels’ anti-trust investigation of the social media giant’s online marketplace constitutes a violation of Facebook’s privacy.

In related news, Mark Zuckerberg murdered his parents and asked for mercy as an orphan.

This is f%$#ed up and sh%$:

American tech giants have enjoyed a reversal of their EU legal fortunes over the past fortnight as Euro nation courts issued rulings in their favor – and now Facebook has even sued the European Union itself, alleging the political bloc’s agencies broke their own data protection rules.

Facebook filed a lawsuit against EU competition regulators on Monday alleging that enforcers were improperly seeking access to sensitive employee personal data.

The anti social media network said in a statement to financial newswire Reuters that EU regulators had made “exceptionally broad” demands for documents during an antitrust investigation into Facebook’s online marketplace.

Facebook assistant general counsel Tim Lamb was quoted as saying, apparently with a straight face: “The exceptionally broad nature of the Commission’s requests means we would be required to turn over predominantly irrelevant documents that have nothing to do with the Commission’s investigations, including highly sensitive personal information such as employees’ medical information, personal financial documents, and private information about family members of employees.”

Notwithstanding Facebook’s business model of encouraging the world’s citizens to upload such details about themselves to Facebook’s services for the company to monetise, the suit has been filed in the EU General Court in Luxembourg. It includes demands for the court to halt further EU regulatory data demands against Facebook until further notice. An EU Commission spokesman said it would defend the lawsuit.

Facebook also told the newswire that EU agents had demanded copies of any internal Facebook document containing phrases such as “not good for us”, “big question” and “shut down”, among 2,500 others.

A Good Start

I’d like to see them ban the privacy-invading ways of the various for-profit Edu-Tech firms out there as well:

The New York legislature today passed a moratorium on the use of facial recognition and other forms of biometric identification in schools until 2022. The bill, which has yet to be signed by Governor Andrew Cuomo, comes in response to the launch of facial recognition by the Lockport City School District and appears to be the first in the nation to explicitly regulate or ban use of the technology in schools.

In January, Lockport became one of the only U.S. school districts to adopt facial recognition in all of its K-12 buildings, which serve about 5,000 students. Proponents argued the $1.4 million system made by Canada-based SN Technologies’ Aegis kept students safe by enforcing watchlists and sending alerts when it detected someone dangerous (or otherwise unwanted). It could also detect 10 types of guns and alert select district personnel and law enforcement. But critics said it could be used to surveil students and build a database of sensitive information the school district might struggle to keep secure.

While the Lockport schools’ privacy policy stated that the watchlist wouldn’t include students and the database would only cover non-students deemed a threat, including sex offenders or those banned by court order, district superintendent Michelle Bradley ultimately oversaw which individuals were added to the system. It was reported earlier this month that school board president John Linderman couldn’t guarantee student photos would never be included for disciplinary reasons.

Letting private companies profit from spying on our children is wrong.

News You Can Use

Researchers at the University of Chicago have a project named Fawkes, which poisons images so that AI cannot be trained by scraping them from public websites while the images remain nearly unchanged to human eyes.

I’m thinking that Imgur should offer this as a filter:

Researchers at the University of Chicago’s Sand Lab have developed a technique for tweaking photos of people so that they sabotage facial-recognition systems.

The project, named Fawkes in reference to the mask in the V for Vendetta graphic novel and film depicting 16th century failed assassin Guy Fawkes, is described in a paper scheduled for presentation in August at the USENIX Security Symposium 2020.

Fawkes consists of software that runs an algorithm designed to “cloak” photos so they mistrain facial recognition systems, rendering them ineffective at identifying the depicted person. These “cloaks,” which AI researchers refer to as perturbations, are claimed to be robust enough to survive subsequent blurring and image compression.

The paper [PDF], titled, “Fawkes: Protecting Privacy against Unauthorized Deep Learning Models,” is co-authored by Shawn Shan, Emily Wenger, Jiayun Zhang, Huiying Li, Haitao Zheng, and Ben Zhao, all with the University of Chicago.

………

The boffins claim their pixel scrambling scheme provides greater than 95 per cent protection, regardless of whether facial recognition systems get trained via transfer learning or from scratch. They also say it provides about 80 per cent protection when clean, “uncloaked” images leak and get added to the training mix alongside altered snapshots.

They claim 100 per cent success at avoiding facial recognition matches using Microsoft’s Azure Face API, Amazon Rekognition, and Face++. Their tests involve cloaking a set of face photos and providing them as training data, then running uncloaked test images of the same person against the mistrained model.

………

The researchers have posted their Python code on GitHub, with instructions for users of Linux, macOS, and Windows. Interested individuals may wish to try cloaking publicly posted pictures of themselves so that if the snaps get scraped and used to train to a facial recognition system – as Clearview AI is said to have done – the pictures won’t be useful for identifying the people they depict. 

If someone comes up with a simple tool, it should be used on every social media post.
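As a very rough illustration of the underlying idea (and emphatically not the Fawkes algorithm itself, which computes targeted “cloaks” against face-recognition feature extractors), here is a toy Python sketch that adds a low-amplitude perturbation to a photo before it gets posted. It assumes the Pillow and NumPy packages and only demonstrates how small the pixel changes can be while remaining invisible to human eyes; for the real thing, use the researchers’ published code.

import numpy as np
from PIL import Image

def add_toy_perturbation(src_path: str, dst_path: str, strength: int = 3) -> None:
    """Add low-amplitude random noise (roughly +/- strength per channel).
    NOT the Fawkes cloaking algorithm; just a toy illustration of an
    imperceptible pixel-level change."""
    img = np.asarray(Image.open(src_path).convert("RGB"), dtype=np.int16)
    noise = np.random.randint(-strength, strength + 1, size=img.shape)
    cloaked = np.clip(img + noise, 0, 255).astype(np.uint8)
    Image.fromarray(cloaked).save(dst_path)

# add_toy_perturbation("selfie.jpg", "selfie_cloaked.png")

Random noise like this will not fool a classifier, which is the whole point of Fawkes’ optimized perturbations, but it shows what “nearly unchanged to human eyes” means in practice.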

This is No Surprise

The European Court of Justice has ruled that servers in the US are insufficiently secure to comply with EU privacy regulations.

This is no surprise. The deal with the US has largely been a fig-leaf created as a result of brow-beating of European regulators by the US State Security Apparatus:

The European Union’s top court on Thursday threw a large portion of transatlantic digital commerce into disarray, ruling that data of E.U. residents is not sufficiently protected from government surveillance when it is transferred to the United States.

The ruling was likely to increase transatlantic tensions at a moment when President Trump has already been threatening tariffs and retaliation against the E.U. for what he says are unfair business practices. It was a victory for privacy advocates, who said that E.U. citizens are not as protected when their information is transferred to U.S. servers as when that information stays inside Europe.

The European Court of Justice ruled that a commonly used data protection agreement known as Privacy Shield did not adequately uphold E.U. privacy law.

………

The court said that it was unacceptable for E.U. citizens not to have “actionable rights” to question U.S. surveillance practices.

European data privacy advocates celebrated the decision.

It’s a good thing that the US State Security apparatus is finally getting some push-back internationally.

Rinse, Lather, Repeat

Some things never change:

Facebook has admitted that it wrongly shared the personal data of ‘inactive’ users for longer than it was authorized to, as revealed in a blog post from the company.

The social media giant estimates the error saw around 5,000 third-party app developers continue to receive information about users who had previously used Facebook to sign into their apps, even if users hadn’t used the app in the past 90 days.

Exceeding that time frame goes against Facebook’s policy, which promises third-party apps would no longer be able to receive personal information about a user if they had not accessed the app within the last 90 days.

………

The 90-day limit was introduced as part of Facebook’s overhaul of its privacy settings, following the Cambridge Analytica scandal in 2018 which saw an estimated 87 million users have their personal data harvested by the now defunct political consulting firm without consent.

This is something that happens with Facebook on a VERY regular basis.

This is not an error; it is deliberate policy.

Not a Surprise

Former deputy CIA Director Avril Haines is a major security advisor to the Biden campaign.

She has scrubbed her bio to remove any reference to Palantir, Peter Thiel’s surveillance contractor.

To say that Palantir is controversial, given Thiel’s prominent position with the right wing and the firm’s function as a cut-out to enable surveillance by the US State Security Apparatus, would be an understatement:

In the run-up to the 2020 election, former Vice President Joe Biden’s campaign is putting together a foreign policy team for a potential future administration. Among those described as being part of the team is Avril Haines, former deputy director of the CIA during the Obama administration. According to an NBC News report from last week, Haines has been tapped to work advising on policy, as well as lead the national security and foreign policy team.

In addition to her past national security work and impressive presence in the D.C. think tank world, Haines has in the past described herself as a former consultant for the controversial data-mining firm Palantir. Haines’s biography page at the Brookings Institute, where she is listed as a nonresident senior fellow, boasted of this affiliation until at least last week, when it suddenly no longer appeared on the page.

The nature of the consulting work that Haines did for Palantir is not clear. As of press time, requests for comment to her, the Biden campaign, Palantir, and Brookings were not answered. Prior to being removed from the Brookings page, the connection to the data-mining company was listed alongside a long list of other affiliations that were similarly pared down.

The affiliation — and its apparent disappearance — raises questions for a campaign that has posed itself as the antithesis to President Donald Trump’s far-right governance. Co-founded by a far-right, Trump-supporting tech billionaire, Palantir, whose business has benefited from a slew of government contracts, has been accused of aiding in the Trump administration’s immigration detention programs in the U.S. and helping the Trump administration build out its surveillance state.

Palantir has been profiting off of invading people’s privacy for the state since the Bush administration, and anyone having an involvement with the organization should be viewed with a lot of suspicion.

The ties to the Trump administration aren’t the only aspect of Palantir’s history that raises questions. The company has also been accused in the past of plotting to intimidate journalists involved in reporting documents released by WikiLeaks. And Palantir has also provided services to police — another move that appears to put the company out of step with the current political moment. The company also aided the National Security Agency by creating the tools to facilitate worldwide spying.

Haines’ involvement with Palantir is problematic when juxtaposed with her prominent position in the Biden campaign.

The decision to scrub her record is even more concerning.

Corrupt Violent Thugs

I am referring, of course, to the Sergeants Benevolent Association of New York City, which just tweeted out the arrest report of Mayor Bill de Blasio’s daughter, who had been arrested for participating in a protest.

This is blatantly illegal, and there is a whole chain of command that was behind this.

And still de Blasio is shilling for the NYPD, even though he was elected on promises of reforming that department, and even though he cannot run again for mayor because of term limits.

I guess that cringing becomes second nature with enough practice:

Among the hundreds of protesters arrested over the four days of demonstrations in New York City over the killing of George Floyd in Minneapolis, only one was highlighted by name by a police union known for its hostility toward Mayor Bill de Blasio.

The name of that protester? Chiara de Blasio, the mayor’s daughter.

The union, the Sergeants Benevolent Association, used Twitter to post a police report documenting the arrest on Saturday night of Ms. de Blasio, 25.

The Police Department does not normally release internal police reports, and Ms. de Blasio’s contained personal details, including her height, weight, address, date of birth and driver’s license information.

The post was removed for violation of Twitter rules, and the union’s account was suspended Monday morning.

“The account is temporarily locked for violating our private information policy,” a Twitter spokesman confirmed.

Citing safety concerns, Twitter prohibits users from posting other people’s “private information” without their consent, a practice known as “doxxing.”

This is a deeply corrupt, and deeply corrupting, action taken by principals in the Sergeants Benevolent Association, and it needs to be treated as the crime (felony) that it is.

This is an active conspiracy against civilian oversight of the police force, so it is not just a matter of petty corruption; it is an act reminiscent of a secret police.

Live in Obedient Fear, Citizens

The Senate has voted to allow warrantless collection of your web browsing history:

The US Senate has voted to give law enforcement agencies access to web browsing data without a warrant, dramatically expanding the government’s surveillance powers in the midst of the COVID-19 pandemic.

The power grab was led by Senate majority leader Mitch McConnell as part of a reauthorization of the Patriot Act, which gives federal agencies broad domestic surveillance powers. Sens. Ron Wyden (D-OR) and Steve Daines (R-MT) attempted to remove the expanded powers from the bill with a bipartisan amendment.

But in a shock upset, the privacy-preserving amendment fell short by a single vote after several senators who would have voted “Yes” failed to show up to the session, including Bernie Sanders. Nine Democratic senators also voted “No,” causing the amendment to fall short of the 60-vote threshold it needed to pass.

Yes, I am very disappointed that Sanders was not there to support the amendment, and mad as hell at the Democrats who voted against the amendment to strip this from the bill.

Hopefully, Nancy Pelosi won’t ram it through the House.

Who am I kidding, OF COURSE Nancy Pelosi will ram it through the House.

F%$# No!

It appears that Bernie Sanders is being pressured to turn over his donor list to Joe Biden and the Democratic Party establishment (There is no Democratic Party establishment).

F%$# that.

The people who donated to you do not want to be an asset to be managed by some hack political consultant whose only qualification is their close relationships with members of the Democratic Party establishment (There is no Democratic Party establishment).

Please, just don’t.

Good Riddance

Seriously, having Google running your life sounds even worse than George Orwell’s worst nightmares:

When Google sibling Sidewalk Labs announced in 2017 a $50 million investment into a project to redevelop a portion of Toronto’s waterfront, it seemed almost too good to be true. Someday soon, Sidewalk Labs promised, Torontonians would live and work in a 12-acre former industrial site in skyscrapers made from timber—a cheaper and more sustainable building material. Streets paved with a new sort of light-up paver would let the development change its design in seconds, able to play host to families on foot and to self-driving cars. Trash would travel through underground chutes. Sidewalks would heat themselves. Forty percent of the thousands of planned apartments would be set aside for low- and middle-income families. And the Google sister company founded to digitize and techify urban planning would collect data on all of it, in a quest to perfect city living.

………

But Sidewalk Labs’ vision was in trouble long before the pandemic. Since its inception, the project had been criticized by progressive activists concerned about how the Alphabet company would collect and protect data, and who would own that data. Conservative Ontario premier Doug Ford, meanwhile, wondered whether taxpayers would get enough bang from the project’s bucks. New York-based Sidewalk Labs wrestled with its local partner, the waterfront redevelopment agency, over ownership of the project’s intellectual property and, most critically, its financing. At times, its operators seemed confounded by the vagaries of Toronto politics. The project had missed deadline after deadline.

The partnership took a bigger hit last summer, when Sidewalk Labs released a splashy and even more ambitious 1,524-page master plan for the lot that went well beyond what the government had anticipated, and for which the company pledged to spend up to $1.3 billion to complete. The redevelopment group wondered whether some of Sidewalk Labs’ proposals related to data collection and governance were even “in compliance with applicable laws.” It balked at a suggestion that the government commit millions to extend public transit into the area, a commitment, the group reminded the company, that it could not make on its own.

Seriously, giving your city to a profit-driven, ghoulish mega-corporation seems to be high on the list of really stupid ideas.

Good that it is over.

Don’t Use Zoom

Ever wondered how the @zoom_us macOS installer does it’s job without you ever clicking install? Turns out they (ab)use preinstallation scripts, manually unpack the app using a bundled 7zip and install it to /Applications if the current user is in the admin group (no root needed). pic.twitter.com/qgQ1XdU11M

— Felix (@c1truz_) March 30, 2020

We now have news of a litany of privacy breaches by Zoom and misrepresentations of its capabilities.

Given their history, the logical conclusion is that violating their users’ privacy is a central part of their business model:

Zoom, the video conferencing service whose use has spiked amid the Covid-19 pandemic, claims to implement end-to-end encryption, widely understood as the most private form of internet communication, protecting conversations from all outside parties. In fact, Zoom is using its own definition of the term, one that lets Zoom itself access unencrypted video and audio from meetings.

With millions of people around the world working from home in order to slow the spread of the coronavirus, business is booming for Zoom, bringing more attention on the company and its privacy practices, including a policy, later updated, that seemed to give the company permission to mine messages and files shared during meetings for the purpose of ad targeting.

Still, Zoom offers reliability, ease of use, and at least one very important security assurance: As long as you make sure everyone in a Zoom meeting connects using “computer audio” instead of calling in on a phone, the meeting is secured with end-to-end encryption, at least according to Zoom’s website, its security white paper, and the user interface within the app. But despite this misleading marketing, the service actually does not support end-to-end encryption for video and audio content, at least as the term is commonly understood. Instead it offers what is usually called transport encryption, explained further below.

………

But when reached for comment about whether video meetings are actually end-to-end encrypted, a Zoom spokesperson wrote, “Currently, it is not possible to enable E2E encryption for Zoom video meetings. Zoom video meetings use a combination of TCP and UDP. TCP connections are made using TLS and UDP connections are encrypted with AES using a key negotiated over a TLS connection.”

The encryption that Zoom uses to protect meetings is TLS, the same technology that web servers use to secure HTTPS websites. This means that the connection between the Zoom app running on a user’s computer or phone and Zoom’s server is encrypted in the same way the connection between your web browser and this article (on https://theintercept.com) is encrypted. This is known as transport encryption, which is different from end-to-end encryption because the Zoom service itself can access the unencrypted video and audio content of Zoom meetings. So when you have a Zoom meeting, the video and audio content will stay private from anyone spying on your Wi-Fi, but it won’t stay private from the company. (In a statement, Zoom said it does not directly access, mine, or sell user data; more below.)

………

“They’re a little bit fuzzy about what’s end-to-end encrypted,” Green said of Zoom. “I think they’re doing this in a slightly dishonest way. It would be nice if they just came clean.”

………

Without end-to-end encryption, Zoom has the technical ability to spy on private video meetings and could be compelled to hand over recordings of meetings to governments or law enforcement in response to legal requests. While other companies like Google, Facebook, and Microsoft publish transparency reports that describe exactly how many government requests for user data they receive from which countries and how many of those they comply with, Zoom does not publish a transparency report. On March 18, human rights group Access Now published an open letter calling on Zoom to release a transparency report to help users understand what the company is doing to protect their data.

And it’s not just subpoenas you have to worry about: if you bribe a Zoom employee, you could get access to the chat.
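The distinction the article draws is easy to see in code. Here is a minimal sketch, assuming the Python cryptography package: under true end-to-end encryption the relay in the middle only ever handles ciphertext, because the key lives with the participants; under transport-only encryption (TLS), the server terminates the encrypted connection and holds the plaintext itself, which is why Zoom can read, log, or hand over your meetings.

from cryptography.fernet import Fernet

# End-to-end model (sketch): the participants share a key out of band,
# so the relay in the middle only ever sees opaque ciphertext.
shared_key = Fernet.generate_key()     # known to Alice and Bob only
alice = Fernet(shared_key)
bob = Fernet(shared_key)

ciphertext = alice.encrypt(b"the quarterly numbers are bad")

def relay(blob: bytes) -> bytes:
    """A Zoom-like relay under the end-to-end model: it can forward the
    bytes but cannot decrypt them, because it never holds shared_key."""
    return blob

print(bob.decrypt(relay(ciphertext)))  # b'the quarterly numbers are bad'

# Transport-only model (what the article says Zoom actually does): TLS
# protects each hop between a client and the server, but the server
# decrypts that hop, so the plaintext sits on Zoom's machines.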

Also, Zoom has been found to share user data with Facebook even if you are not a Facebook member, to have refused to fix a remote-access vulnerability until it was reported to the FTC, to have allowed meeting hosts to spy on users’ window status on their PCs, and to collect personally identifiable data and link it to your IP address.

This is not a company that you want to deal with.