Tag: Privacy

This is a Feature, Not a Bug

As a result of a new privacy law in California, many businesses have reduced the amount of data that they collect:

Last year, a major U.S. airline went looking for all the things it knew about its passengers. Among the details it had gathered, the company found, were consumers’ food preferences—information that seems innocuous but that could also reveal a passenger’s religious beliefs if they select a kosher or halal meal. So the airline decided to stop saving the food-preference information, according to Integris, the data privacy startup that helped the airline review its data practices. (Integris declined to name its client.)

Instead, the airline will ask passengers what they’d like to eat before every flight.

Recently, treasure hunts like this one have been taking place across industries and all around the country. Companies are mapping the data that they own, and some, like the airline, are proactively scrubbing sensitive information to avoid trouble.

When companies cut back on hoarding sensitive data, consumers win. Less of their private information is susceptible to data breaches and leaks, viewable by unscrupulous company insiders, or available to be sold to data brokers or advertisers.

This is a surprising turn: Data about consumers can be wildly lucrative—it fuels a $100 billion-plus digital-advertising industry, among other things—and companies generally like to gather as much of it as they can. But something changed this year. A new state law, the California Consumer Privacy Act, or CCPA, has turned data from an unadulterated asset into a potential liability.


The CCPA, in effect since Jan. 1, grants several new digital rights to Californians. They can now ask companies for a copy of the information the firms know about them, limit how that data is shared or sold, and demand that it’s deleted altogether.

Businesses also have to disclose new details about the personal information they gather and who they share it with.

Many companies have been setting up new tools to allow Californians to exercise these new rights, and some, such as Microsoft, have extended them to all their customers. But the law has had a second-order effect, too, that has an impact on almost every consumer: It has pushed some firms to slim down their troves of personal consumer data.

That’s because the CCPA’s new transparency requirements make it less attractive to hoover up everything there is to know about consumers. By gathering less, a company can avoid having to make damning disclosures about what kinds of data it keeps, and potentially turn privacy into a selling point.

Plus, companies can now get in legal trouble if they’re found to have not taken “reasonable” measures to safeguard particularly sensitive data such as Social Security numbers—a good reason to just get rid of that information if they don’t need it.

“That’s a huge incentive for companies not to collect those categories of information unless they absolutely need to,” says Ross, who co-authored the California ballot initiative that led to the CCPA. 

This is an unalloyed good, because privacy is an unalloyed good.

And the Supreme Court Will Probably Buy this Bullsh%$

The right wingers at the Supreme Court have for years used the First Amendment to shut down common-sense regulation of predatory businesses.

My prediction is that they will do this again, and say that the First Amendment protects ISPs’ right to resell your browsing history:

The US state of Maine is violating internet broadband providers’ free speech by forcing them to ask for their customers’ permission to sell their browser history, according to a new lawsuit.


ACA Connects, CTIA, NCTA and USTelecom are collectively suing [PDF] Maine’s attorney general Aaron Frey, and the chair and commissioners of Maine’s Public Utilities Commission claiming that the statute, passed in June 2019, “imposes unprecedented and unduly burdensome restrictions on ISPs’, and only ISPs’, protected speech.”

How so? Because it includes “restrictions on how ISPs communicate with their own customers that are not remotely tailored to protecting consumer privacy.” The lawsuit even explains that there is a “proper way to protect consumer privacy” – and that’s the way the FCC does it, through “technology-neutral, uniform regulation.” Although that regulation is actually the lack of regulation.

If you’re still having a hard time understanding how requiring companies to get their customers’ permission before they sell their personal data infringes the First Amendment, the lawsuit has more details.

It “(1) requires ISPs to secure ‘opt-in’ consent from their customers before using information that is not sensitive in nature or even personally identifying; (2) imposes an opt-out consent obligation on using data that are by definition not customer personal information; (3) limits ISPs from advertising or marketing non-communications-related services to their customers; and (4) prohibits ISPs from offering price discounts, rewards in loyalty programs, or other cost saving benefits in exchange for a customer’s consent to use their personal information.”

All of this results in an “excessive burden” on ISPs, they claim, especially because not everyone else had to do the same. The new statute includes “no restrictions at all on the use, disclosure, or sale of customer personal information, whether sensitive or not, by the many other entities in the Internet ecosystem or traditional brick-and-mortar retailers,” the lawsuit complains.

Listen, I think that we should get some stakes, honey, and a few anthills of REALLY pissed off ants, and have a heart to heart with the senior executives of ACA Connects, CTIA, NCTA and USTelecom.

Perhaps we should bring in their lawyers for a consult as well.

When Barr Demands that Apple Unlock Their Phones

He claims that this is the only way for law enforcement to get into locked phones.

He is lying.

What this is really about is their wanting to be able to hack phones remotely, which, of course, will be used without a warrant by the US state security apparatus to do things like fight terrorism and spy on girlfriends:

President Donald Trump’s bizarre friendship with his buddy Tim Cook is in trouble. With Apple once again refusing to allow the FBI to unlock a terrorist’s iPhone (two of them, actually, this time around), the president sent out a tweet the other day that said, “We are helping Apple all of the time on TRADE and so many other issues, and yet they refuse to unlock phones used by killers, drug dealers, and other violent criminal elements. They will have to step up to the plate and help our great Country.”


In the current situation, the two handsets that the FBI wants Apple to open belong to Mohammed Saeed Alshamrani. The latter allegedly killed three people last month at a Navy base in Pensacola, Florida during an act that is being called terrorism. Because the FBI asked Apple to unlock the phones, it appeared that companies like Cellebrite and Grayshift could not unlock any iPhones running on iOS 13. But Bloomberg reports that Cellebrite recently pushed out an update to its machines that will allow law enforcement agencies to extract and analyze information from several locked iPhone models.


And that brings us to this question, if the FBI can open both of Mohammed Saeed Alshamrani’s iPhones without Apple, why is President Trump, Attorney General Barr, and the FBI putting pressure on Apple to unlock these phones? Perhaps it has to do with setting a precedent for the future when Apple comes up with a way to block the latest technology used by Cellebrite and Grayshift. However, the president should tread lightly here; he certainly doesn’t want to lose the “friendship” he has with the man he once called Tim Apple.

Why are they putting pressure on Apple? Because Cellebrite and Grayshift’s devices require physical possession of the phone, and hence a warrant.

They, and by that I mean the US state security apparatus, want to be able to spy on citizens without having to go to court.

Mom, They Are Being Evil Again!!!

On the heels of Google deciding to cripple ad blockers in its Chrome browser now that it has achieved a monopoly, Google is rolling out a programming interface that will allow websites to see what programs are installed on your machine, which can, among other things, deanonymize users and possibly reveal a host of other personal data.

But Google wants to accommodate its advertisers, so f%$# the rest of us:

A nascent web API called getInstalledRelatedApps offers a glimpse of why online privacy remains such an uncertain proposition.

In development since 2015, Google has been experimenting with the API since the release of Chrome 59 in 2017. As its name suggests, it is designed to let web apps and sites determine whether a corresponding native app is installed on a user’s device.

The purpose of the API, as described in the proposed specification, sounds laudable. More and more, the docs state, users will have web apps and native apps from the same source installed on the same device, and as the apps’ feature sets converge and overlap, it will become important to be able to distinguish between the two, so users don’t receive two sets of notifications, for example.

But as spec editor and Google engineer Rayan Kanso observed in a discussion of the proposed browser plumbing, the initiative isn’t really about users so much as web and app publishers.

Late last month, after Kanso published notice of Google’s intent to officially support the API in a future version of Chrome, Daniel Bratell, a developer for the Opera browser, asked how this will help users.

“The mobile web already suffers from heavy handed attempts at getting web users to replace web sites with native apps and this mostly looks useful for funneling users from the open web to closed ecosystems,” Bratell said in a developer forum post.

Kanso made clear the primary focus of the proposal isn’t Chrome users.

“Although this isn’t an API that would directly benefit users, it indirectly benefits them through improved web experiences,” Kanso wrote. “We received very positive OT [origin trial] feedback from partners using this API, and the alternative is them using hacks to figure whether their native app is installed.”


That’s not to say privacy concerns are ignored. On Wednesday, Google engineer Yoav Weiss joined the discussion to express concern about the API’s privacy implications.

“Knowing that specific apps were installed can contain valuable and potentially sensitive information about the user: income level, relationship status, sexual orientation, etc,” Weiss wrote, adding, “The collection of bits of answers to ‘Is app X installed?’ can be a powerful fingerprinting vector.”


And in a separate discussion Henri Sivonen, a Mozilla engineer, worried that the API might lead to more attempts to steer users away from the web and toward a native app, something websites like Reddit already try to do.
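Weiss’s fingerprinting point is easy to demonstrate. Here is a minimal Python sketch (my own illustration, not the API itself; the app names are invented): each yes/no answer about an installed app contributes one bit, and the combined bits form a stable identifier.

```python
from hashlib import sha256

# Hypothetical probe list -- app names invented for illustration.
APPS = ["bank-app", "dating-app", "news-app", "vpn-app"]

def fingerprint(installed):
    # Each "is app X installed?" answer is one bit; n probes can
    # distinguish up to 2**n users when combined.
    bits = "".join("1" if app in installed else "0" for app in APPS)
    return sha256(bits.encode()).hexdigest()[:8]

alice = fingerprint({"bank-app", "vpn-app"})
bob = fingerprint({"dating-app"})
assert alice != bob  # different install sets, different identifiers
```

Four probes already yield sixteen buckets; a few dozen probes, combined with other browser signals, is enough to single out individual users.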

Google users are not the customer; they are the product to be monetized.

Break them up.

Live in Obedient Fear, Citizen

The Orwellian-named Department of Homeland Security is looking to change regulations to require facial scans of US citizens at the border:

Homeland Security wants to expand facial recognition checks for travelers arriving to and departing from the U.S. to also include citizens, which had previously been exempt from the mandatory checks.

In a filing, the department has proposed that all travelers, and not just foreign nationals or visitors, will have to complete a facial recognition check before they are allowed to enter the U.S., but also to leave the country.


But although there may not always be a clear way to opt-out of facial recognition at the airport, U.S. citizens and lawful permanent residents — also known as green card holders — have been exempt from these checks, the existing rules say.

Now, the proposed rule change to include citizens has drawn ire from one of the largest civil liberties groups in the country.

“Time and again, the government told the public and members of Congress that U.S. citizens would not be required to submit to this intrusive surveillance technology as a condition of traveling,” said Jay Stanley, a senior policy analyst at the American Civil Liberties Union.


Citing a data breach of close to 100,000 license plate and traveler images in June, as well as concerns about a lack of sufficient safeguards to protect the data, Stanley said the government “cannot be trusted” with this technology and that lawmakers should intervene.

Our surveillance state is out of control.

ISPs Lie

The latest controversy over internet technology is browsers implementing DNS over HTTPS, which would prevent ISPs from tracking their users’ browsing habits and selling that information to third parties.

Mozilla is claiming, with a lot of justification, that ISPs lied when lobbying against this technology:

Mozilla is urging Congress to reject the broadband industry’s lobbying campaign against encrypted DNS in Firefox and Chrome.

The Internet providers’ fight against this privacy feature raises questions about how they use broadband customers’ Web-browsing data, Mozilla wrote in a letter sent today to the chairs and ranking members of three House of Representatives committees. Mozilla also said that Internet providers have been giving inaccurate information to lawmakers and urged Congress to “publicly probe current ISP data collection and use policies.”

DNS over HTTPS helps keep eavesdroppers from seeing what DNS lookups your browser is making. This can make it more difficult for ISPs or other third parties to monitor what websites you visit.

“Unsurprisingly, our work on DoH [DNS over HTTPS] has prompted a campaign to forestall these privacy and security protections, as demonstrated by the recent letter to Congress from major telecommunications associations. That letter contained a number of factual inaccuracies,” Mozilla Senior Director of Trust and Security Marshall Erwin wrote.

This part of Erwin’s letter referred to an Ars article in which we examined the ISPs’ claims, which center largely around Google’s plans for Chrome. The broadband industry claimed that Google plans to automatically switch Chrome users to its own DNS service, but that’s not what Google says it is doing. Google’s publicly announced plan is to “check if the user’s current DNS provider is among a list of DoH-compatible providers, and upgrade to the equivalent DoH service from the same provider.” If the user-selected DNS service is not on that list, Chrome would make no changes for that user.


In addition to the broadband-industry letter to Congress, Comcast has been giving members of Congress a lobbying presentation that claims the encrypted-DNS plan would “centraliz[e] a majority of worldwide DNS data with Google” and “give one provider control of Internet traffic routing and vast amounts of new data about consumers and competitors.” Comcast and other ISPs are urging Congress to intervene.

But a number of the arguments ISPs made to lawmakers are “premised on a plan that doesn’t exist,” Erwin told Ars last week, referring to the ISPs’ claims about Google.


Mozilla’s letter to Congress said the ISP lobbying against encrypted DNS amounts to telecom associations “explicitly arguing that ISPs need to be in a position to collect and monetize users’ data. This is inconsistent with arguments made just two years earlier regarding whether privacy rules were needed to govern ISP data use.”


Web users are tracked by Google, Facebook, and other advertising companies, of course. ISPs, though, have “privileged access” to users’ browsing histories because they act as the gateway to the Internet, Erwin said to Ars.

There is already “remarkably sophisticated micro-targeting across the Web,” and “we don’t want to see that business model duplicated in the middle of the network,” he said. “We think it’s just a mistake to use DNS for those purposes.”


Mozilla has established specific policy requirements that DNS providers have to meet to earn a spot in Firefox’s encrypted-DNS program. For example, DNS resolvers must delete data that could identify users within 24 hours and only use that data “for the purpose of operating the service.” Providers also “must not retain, sell, or transfer to any third party (except as may be required by law) any personal information, IP addresses or other user identifiers, or user query patterns from the DNS queries sent from the Firefox browser.”
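The mechanism Mozilla is defending is simple enough to sketch. Under RFC 8484, the browser packs an ordinary DNS query into its binary wire format and ships it to the resolver over HTTPS, base64url-encoded in a GET parameter. A minimal Python sketch follows (an illustration of the wire format, not Firefox’s actual implementation; the resolver URL shown is Cloudflare’s public DoH endpoint):

```python
import base64
import struct

def build_doh_url(resolver, hostname):
    """Encode a DNS A-record query for hostname as an RFC 8484 DoH GET URL."""
    # 12-byte DNS header: ID=0 (recommended for cacheable GETs), RD flag set,
    # one question, no answer/authority/additional records.
    header = struct.pack("!HHHHHH", 0, 0x0100, 1, 0, 0, 0)
    # QNAME: length-prefixed labels terminated by a zero byte.
    qname = b"".join(bytes([len(p)]) + p.encode() for p in hostname.split(".")) + b"\x00"
    question = qname + struct.pack("!HH", 1, 1)  # QTYPE=A, QCLASS=IN
    # base64url without padding, per the RFC.
    encoded = base64.urlsafe_b64encode(header + question).rstrip(b"=").decode()
    return f"{resolver}?dns={encoded}"

url = build_doh_url("https://mozilla.cloudflare-dns.com/dns-query", "example.com")
```

An eavesdropper on the wire then sees only a TLS connection to the resolver, not which hostname was looked up — which is exactly what the ISPs are objecting to.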

Do you really trust COMCAST to protect your privacy when their profits depend on NOT protecting your privacy?

I know that I don’t.

I Want This Phone Charger

An artist and programmer has come up with a charger that generates a flood of false information to thwart the attempts of the various internet giants to track you:

Martin Nadal, an artist and coder based in Linz, Austria, has created FANGo, a “defense weapon against surveillance capitalism” that is disguised as a mobile phone charger.

On his page introducing the device, Nadal explains that the inside of the charger hides a microcontroller that takes control of an Android smartphone by accessing the operating system’s Debug Mode. The device then makes queries and interacts with pages on Google, Amazon, YouTube, and other sites “in order to deceive data brokers in their data capture process.” It works similarly to a fake Apple Lightning cable, now mass-produced, that hijacks your device once connected.

Tools to frustrate tracking attempts by advertisers or data brokers are not new—AdNauseam is a plugin that clicks on all ads, while TrackmeNot does random searches on different search engines. Such projects, however, exclusively focus on desktops and web browsers. “Today we interact with the internet from the mobile mostly,” Nadal told Motherboard in an email. “We also use applications, where there is no possibility of using these plugins that hinder the monitoring making the user helpless.”

The device’s name is an acronym for Facebook, Amazon, Netflix, and Google, who represent some of the most profitable companies in the world. Nadal, however, sees them as the engines of surveillance capitalism, a theorization of contemporary capitalism by Shoshana Zuboff, a Harvard Business School professor emerita.


Nadal is working on adding new features that might take such poisoning even further, using techniques such as geolocation spoofing. “[W]hile my phone is quietly charging at home, the data brokers think that I am walking or dining in another part of the city or world,” he said.
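The noise-generation idea behind AdNauseam, TrackMeNot, and FANGo can be sketched in a few lines. This is a hypothetical illustration of the approach (the search-engine URLs are real endpoints, but the term list and function are my own, not taken from any of these tools):

```python
import random

# TrackMeNot-style noise: emit plausible decoy search URLs so that a
# logged browsing history no longer reflects the real user's interests.
DECOY_TERMS = ["weather radar", "pasta recipe", "used bikes", "tax forms"]
ENGINES = ["https://www.google.com/search?q=", "https://duckduckgo.com/?q="]

def decoy_queries(n, rng):
    """Return n plausible-looking decoy search URLs."""
    return [rng.choice(ENGINES) + rng.choice(DECOY_TERMS).replace(" ", "+")
            for _ in range(n)]

urls = decoy_queries(5, random.Random(42))
assert all(u.startswith(tuple(ENGINES)) for u in urls)
```

FANGo’s twist is doing the same thing from a microcontroller driving the phone over its debug interface, so the fake traffic comes from the very device the data brokers are profiling.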

I love it.

A Well Deserved Take-Down

Following Blizzard banning a gamer and taking his prize money after he voiced support for the Hong Kong protests, they have been flooded with GDPR requests by customers who find their kowtowing to China unacceptable.

Complying with these demands is extremely expensive, and failure to comply opens them up to massive fines:

Being a global multinational sure is hard! Yesterday, World of Warcraft maker Blizzard faced global criticism after it disqualified a high-stakes tournament winner over his statement of solidarity with the Hong Kong protests — Blizzard depends on mainland China for a massive share of its revenue and it can’t afford to offend the Chinese state.

Today, outraged gamers on Reddit’s /r/hearthstone forum are scheming a plan to flood Blizzard with punishing, expensive personal information requests under the EU’s expansive General Data Protection Regulation — Blizzard depends on the EU for another massive share of its revenue and it can’t afford the enormous fines it would face if it failed to comply with these requests, which take a lot of money and resources to fulfill.

I really hope that this protest goes forward.

Blizzard is hoping that this will blow over in a few months, but if people put in requests now, the company needs to comply within 30 days or face massive fines, and that ain’t cheap.

Cue Nelson Muntz.

If We Enforced the Law, Half of San Jose Would be in Jail

I am referring, of course, to the recent revelations that Twitter collected users’ phone numbers for two-factor authentication and then sold them to advertisers.

The Silicon Valley business models are increasingly indistinguishable from fraud and various forms of racketeering:

When some users provided Twitter with their phone number to make their account more secure, the company used this information for advertising purposes, according to a blog post from Twitter published on Tuesday.

This isn’t the first time that a large social network has taken information explicitly meant for the purposes of security, and then quietly or accidentally used it for something else entirely. Facebook did something similar with phone numbers provided by users for two-factor authentication, the company confirmed last year.

“We recently discovered that when you provided an email address or phone number for safety or security purposes (for example, two-factor authentication) this data may have inadvertently been used for advertising purposes, specifically in our Tailored Audiences and Partner Audiences advertising system,” Twitter’s announcement reads.

In short, when an advertiser using Twitter uploaded their own marketing list of email addresses or phone numbers, Twitter may have matched the list to people on Twitter “based on the email or phone number the Twitter account holder provided for safety and security purposes,” the post adds.

“This was an error and we apologize,” it read.

This wasn’t an error, it was greed and a disdain for their users, who are, after all, not customers, but the product that they sell to their customers, the advertisers.

Rule 1 of Facebook: Facebook Lies

Rule 2 of Facebook is see rule 1.

They lied about not doing location tracking on their users:

Facebook has been caught bending the truth again – only this time it has been forced to out itself.

For years the antisocial media giant has claimed it doesn’t track your location, insisting to suspicious reporters and privacy advocates that its addicts “have full control over their data,” and that it does not gather or sell that data unless those users agree to it.

No one believed it. So, when it (and Google) were hit with lawsuits trying to get to the bottom of the issue, Facebook followed its well-worn path to avoiding scrutiny: it changed its settings and pushed out carefully worded explanations that sounded an awful lot like it wasn’t tracking you anymore. But it was. Because location data is valuable.

Then, late on Monday, Facebook emitted a blog post in which it kindly offered to help users “understand updates” to their “device’s location settings.”


You may have missed the critical part amid the glowing testimony so we’ll repeat it: “… use precise location even when you’re not using the app…”

Huh, fancy that. It sounds an awful lot like tracking. After all, why would you want Facebook to know your precise location at all times, even when you’re not using its app? And didn’t Facebook promise it wasn’t doing that?


Well, yes it did, and it was being economical with the truth. But perhaps the bigger question is: why now? Why has Facebook decided to come clean all of a sudden? Is it because of the newly announced antitrust and privacy investigations into tech giants? Well, yes, in a roundabout way.

Surprisingly, in a moment of almost honesty which must have felt quite strange for Facebook’s execs, the web giant actually explains why it has stopped pretending it doesn’t track users: because soon it won’t be able to keep up the pretense.

“Android and iOS have released new versions of their operating systems, which include updates to how you can view and manage your location,” the blog post reveals.

That’s right, under pressure from lawmakers and users, both Google and Apple have added new privacy features to their upcoming mobile operating systems – Android and iOS – that will make it impossible for Facebook to hide its tracking activity.

So, Facebook is admitting that they lied only because continuing to lie is completely impossible.

F%$# Zuck. Better yet, how about a serious investigation of allegations of fraud regarding false users and ad sales?

Thanks, Mark

Hundreds of millions of phone numbers linked to Facebook accounts have been found online.

The exposed server contained more than 419 million records over several databases on users across geographies, including 133 million records on U.S.-based Facebook users, 18 million records of users in the U.K., and another with more than 50 million records on users in Vietnam.

But because the server wasn’t protected with a password, anyone could find and access the database.

Each record contained a user’s unique Facebook ID and the phone number listed on the account. A user’s Facebook ID is typically a long, unique and public number associated with their account, which can be easily used to discern an account’s username.


Some of the records also had the user’s name, gender and location by country.

Seriously, f%$# Zuck.

Not Enough Bullets………

After making a complete dog’s breakfast of his time at Equifax, former CEO Richard Smith has secured a $20+ million payout to leave:

For a majority of workers, failure at the workplace is deeply frowned upon and frequently incurs the ultimate penalty—dismissal, usually accompanied with a pittance for severance pay. Yet, in many ways, corporate executives remain above the rigmarole of a pay-for-performance model. A blue-chip executive can run a company into the ground and still be guaranteed a big payday in the form of a multi-million dollar golden sendoff. A few days ago, Equifax Inc., one of the largest consumer credit reporting agencies in the land, made headlines after agreeing to pay a total of $700 million to the U.S. government for claims tied to a massive data breach two years ago.

The data breach will go down as one of the largest ever after private information including social security data from 150 million consumers–about 56 percent of America’s population–was compromised.


Former CEO Richard Smith, on the other hand, is set to collect ~$19.6 million in stock bonuses that cover part of his performance in the year the hack took place, not to mention a generous offer to cover his medical bills for life; a $24-million pension and $50,000 in tax and financial planning services.

That’s roughly 1,000x the maximum payout to affected customers.

Seriously, I am sick to death of the heads-I-win, tails-you-lose bullsh%$ of corrupt business management in America.

By all rights, this guy should be asking, “Do you want fries with that?”

Facial Recognition, How Does It F%$#ing Work?

The ICP song “Miracles,” which the title paraphrases

It appears that Juggalo makeup makes facial recognition software fail:

Last year, Ticketmaster and LiveNation invested in a former military facial recognition company, with the hope that the technology could be used to both strengthen and speed up event entry. If that prospect thoroughly creeps you out, here’s a simple life-hack to defeat Big Brother: become a Juggalo. In a revelation that is sure to freak out the FBI, Insane Clown Posse’s passionate fan base have unintentionally unlocked the secret to thwarting facial recognition.

It turns out that Juggalo face makeup cannot be accurately read by many facial recognition technologies. Most common programs identify areas of contrast — like those around the eyes, nose, and chin — and then compare those points to images within a database. The black bands frequently used in Juggalo makeup obscure the mouth and cover the chin, totally redefining a person’s key features.

I’m considering wearing the makeup in my day to day life.

Have I Mentioned that Amazon is Evil Before?

When you “buy” a Prime Membership, you are selling yourself to them:

Amazon’s Alexa smart assistant may be useful, but the privacy concerns aren’t going away anytime soon.

Now, in a fresh turn of events, the retail giant has confirmed that it keeps transcripts and voice recordings indefinitely, and only removes them if they’re manually deleted by users.


Privacy in the Internet of Things space has already been a hot topic. Earlier this April, Bloomberg published a piece about how thousands of Amazon employees listen to voice recordings captured in Echo speakers, transcribing and annotating them to improve the Alexa digital assistant that powers the smart speakers.

Then in May, the retail behemoth came under further scrutiny for its data collection practices after CNET reported that Alexa assistant not only keeps your voice recordings, but also keeps a record of your voice transcriptions for improving its AI algorithms, with no option to delete them.


Amazon’s response points out that even developers of Alexa skills can keep a record of every transaction or routinely scheduled activity a user makes with an Echo device. “When a customer interacts with an Alexa skill, that skill developer may also retain records of the interaction,” the company wrote in its response.


But the lack of clarity surrounding its data collection and retention policies has revived debates over the sometimes conflicting goals of convenience and privacy. And [Sen. Chris] Coons isn’t exactly satisfied with Amazon’s reply.

“Amazon’s response leaves open the possibility that transcripts of user voice interactions with Alexa are not deleted from all of Amazon’s servers, even after a user has deleted a recording of his or her voice,” he said in a statement. “What’s more, the extent to which this data is shared with third parties, and how those third parties use and control that information, is still unclear.”

I believe the technical term for this sort of business and technical practice is, “Dystopian.”

Internet group brands Mozilla ‘internet villain’ for supporting DNS privacy feature – TechCrunch

An ISP group in the UK is claiming that Mozilla is making users less safe by implementing DNS-over-HTTPS, because it prevents ISPs from filtering the sites that the UK government wants them to block.

My guess is that they are really upset because it makes it much tougher for ISPs to collect data to resell to advertisers.

I call hypocrisy on their accusation that Mozilla is an ‘internet villain’ for using DNS-over-HTTPS:

An industry group of internet service providers has branded Firefox browser maker Mozilla an “internet villain” for supporting a DNS security standard.

The U.K.’s Internet Services Providers’ Association (ISPA), the trade group for U.K. internet service providers, nominated the browser maker for its proposed effort to roll out the security feature, which they say will allow users to “bypass UK filtering obligations and parental controls, undermining internet safety standards in the UK.”

Mozilla said late last year it was planning to test DNS-over-HTTPS to a small number of users.

Whenever you visit a website — even if it’s HTTPS enabled — the DNS query that converts the web address into an IP address that computers can read is usually unencrypted. The security standard is implemented at the app level, making Mozilla the first browser to use DNS-over-HTTPS. By encrypting the DNS query it also protects the DNS request against man-in-the-middle attacks, which allow attackers to hijack the request and point victims to a malicious page instead.

DNS-over-HTTPS also improves performance, making DNS queries — and the overall browsing experience — faster.


The ISPA’s nomination quickly drew ire from the security community. Amid a backlash on social media, the ISPA doubled down on its position. “Bringing in DNS-over-HTTPS by default would be harmful for online safety, cybersecurity and consumer choice,” but said it encourages “further debate.”

One internet provider, Andrews & Arnold, donated £2,940 — around $3,670 — to Mozilla in support of the nonprofit. “The amount was chosen because that is what our fee for ISPA membership would have been, were we a member,” said a tweet from the company.

Mozilla spokesperson Justin O’Kelly told TechCrunch: “We’re surprised and disappointed that an industry association for ISPs decided to misrepresent an improvement to decades old internet infrastructure.”

“Despite claims to the contrary, a more private DNS would not prevent the use of content filtering or parental controls in the UK. DNS-over-HTTPS (DoH) would offer real security benefits to UK citizens. Our goal is to build a more secure internet, and we continue to have a serious, constructive conversation with credible stakeholders in the UK about how to do that,” he said.
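The mechanism the excerpt describes is specified in RFC 8484: the browser builds an ordinary DNS wire-format query, base64url-encodes it, and fetches it over HTTPS from a resolver (Mozilla’s default partner is Cloudflare). A minimal sketch of the GET-request form, with illustrative function names of my own choosing:

```python
import base64
import struct

def build_dns_query(hostname: str) -> bytes:
    """Build a minimal DNS wire-format query for an A record."""
    # Header: ID=0 (RFC 8484 recommends 0 so responses cache well),
    # flags=0x0100 (recursion desired), 1 question, 0 other records.
    header = struct.pack(">HHHHHH", 0, 0x0100, 1, 0, 0, 0)
    question = b""
    for label in hostname.split("."):
        question += bytes([len(label)]) + label.encode("ascii")
    question += b"\x00"                    # root label terminator
    question += struct.pack(">HH", 1, 1)   # QTYPE=A, QCLASS=IN
    return header + question

def doh_get_url(hostname: str,
                resolver: str = "https://mozilla.cloudflare-dns.com/dns-query") -> str:
    """Form an RFC 8484 GET URL: base64url-encode the query, strip '=' padding."""
    encoded = base64.urlsafe_b64encode(build_dns_query(hostname)).rstrip(b"=")
    return f"{resolver}?dns={encoded.decode('ascii')}"

print(doh_get_url("example.com"))
```

Because the whole exchange rides inside an ordinary TLS connection to the resolver, an ISP in the middle sees only encrypted traffic to a well-known HTTPS endpoint — which is exactly why it defeats both DNS-based filtering and DNS-based data collection.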

F%$# the ISPA.

Live in Obedient Fear, Citizen

Amazon employees are routinely listening to your Alexa recordings without your knowledge:

Tens of millions of people use smart speakers and their voice software to play games, find music or trawl for trivia. Millions more are reluctant to invite the devices and their powerful microphones into their homes out of concern that someone might be listening.

Sometimes, someone is.

Amazon.com Inc. employs thousands of people around the world to help improve the Alexa digital assistant powering its line of Echo speakers. The team listens to voice recordings captured in Echo owners’ homes and offices. The recordings are transcribed, annotated and then fed back into the software as part of an effort to eliminate gaps in Alexa’s understanding of human speech and help it better respond to commands.

The Alexa voice review process, described by seven people who have worked on the program, highlights the often-overlooked human role in training software algorithms. In marketing materials Amazon says Alexa “lives in the cloud and is always getting smarter.” But like many software tools built to learn from experience, humans are doing some of the teaching.

The team comprises a mix of contractors and full-time Amazon employees who work in outposts from Boston to Costa Rica, India and Romania, according to the people, who signed nondisclosure agreements barring them from speaking publicly about the program. They work nine hours a day, with each reviewer parsing as many as 1,000 audio clips per shift, according to two workers based at Amazon’s Bucharest office, which takes up the top three floors of the Globalworth building in the Romanian capital’s up-and-coming Pipera district. The modern facility stands out amid the crumbling infrastructure and bears no exterior sign advertising Amazon’s presence.

Well, that’s reassuring, isn’t it? Romanian hackers and Indian robocallers listening in on your home.

The work is mostly mundane. One worker in Boston said he mined accumulated voice data for specific utterances such as “Taylor Swift” and annotated them to indicate the searcher meant the musical artist. Occasionally the listeners pick up things Echo owners likely would rather stay private: a woman singing badly off key in the shower, say, or a child screaming for help. The teams use internal chat rooms to share files when they need help parsing a muddled word—or come across an amusing recording.

And then, you become a running gag at the next Christmas party.

If they want people in a petri dish so that they can tweak their algorithms, all they need to do is get their informed consent, pay them, and tell them when the microphone is on or off. But that is inconvenient and expensive, so once again Eric Arthur Blair (George Orwell) is spinning in his grave.

Mark Zuckerberg is a Lying Sack of Excrement, Part VMCMLXIX

This is not a surprise.

All evidence indicates that, at least until it got caught, Facebook did not care how its data was used, because it still got its money; the misuse only mattered once it became a public relations debacle:

Facebook knew about Cambridge Analytica’s dodgy data-gathering practices at least four months before they were exposed in news reports, according to internal FB emails.

Crucially, the staff memos contradict public assurances made by Facebook CEO Mark Zuckerberg as well as sworn testimony offered by the company.

Those emails remain under a court seal, at Facebook’s request, although the Attorney General of Washington DC, Karl Racine, is seeking to have them revealed to all as part of his legal battle against the antisocial media giant.

Racine’s motion to unseal [PDF] the files this month stated “an email exchange between Facebook employees discussing how Cambridge Analytica (and others) violated Facebook’s policies” includes sufficient detail to raise the question of whether Facebook has – yet again – given misleading or outright false statements.

The redacted request reads: “The jurisdictional facts in the document shows that as early as September 2015, a DC-based Facebook employee warned the company that Cambridge Analytica was a “[REDACTED]” asked other Facebook employees to “[REDACTED]” and received responses that Cambridge Analytica’s data-scraping practices were “[REDACTED]” with Facebook’s platform policy.”

It goes on: “The Document also indicates that months later in December 2015, on the same day an article was published by The Guardian on Cambridge Analytica, a Facebook employee reported that she had ‘[REDACTED].'”

The reason this is critical is because Facebook has always claimed it learned of Cambridge Analytica’s misuse of people’s profile information – data obtained via a third-party quiz app built by Aleksandr Kogan – from press reports. Zuckerberg said in a statement more than two years later: “In 2015, we learned from journalists at The Guardian that Kogan had shared data from his app with Cambridge Analytica. It is against our policies for developers to share data without people’s consent, so we immediately banned Kogan’s app from our platform.”

Zuck omitted, incidentally, that Facebook threatened to sue the newspaper if it published its story. Facebook also admitted today that its executives have claimed the same thing as their boss under oath – that the social network only learned about the data misuse from press reports.

The truth is that Facebook is a train wreck with executives encouraged to do whatever they wanted in order to secure Facebook’s position in the digital economy and bring in revenue, regardless of laws or ethics or morals or anything else.

Its work culture is fundamentally broken, with top executives making it plain that the company will obfuscate, mislead, block and bully before they even consider telling the truth – and that culture attracts more of the same.

Even by the notoriously lax ethical standards of the tech industry, Facebook is a particularly bad actor.

Mark Zuckerberg’s Lips Are Moving Again

Mark Zuckerberg has promised that Facebook will not store data in countries that “have a track record of violating human rights like privacy or freedom of expression.”

It appears that everything in Singapore is just hunky-dory for him though:

Mark Zuckerberg laid out his vision for Facebook’s pivot to privacy on Wednesday in a lengthy blog, but it hasn’t taken long for the shine of some of his pronouncements to be dimmed.

Detailing plans to keep user information safe, the Facebook CEO boasted that the company has chosen not to store data in countries that “have a track record of violating human rights like privacy or freedom of expression.”

“If we build data centers and store sensitive data in these countries, rather than just caching non-sensitive data, it could make it easier for those governments to take people’s information,” he said.

But within hours of Zuckerberg publishing his 3,200-word missive, it was pulled apart by human rights groups.

In September last year, Facebook announced that it was spending $1 billion building a new data center in Singapore. Zuckerberg posted about the news on his Facebook page, saying it would be the company’s 15th data center and its first in Asia.


“Singapore is a seriously rights-abusing government that spends an inordinate amount of time trying to intimidate and harass those who express views the government doesn’t like,” Phil Robertson, deputy director of Human Rights Watch’s Asia division, told Business Insider.

In related news, Roger McNamee, an early investor in Facebook and, I assume, a former friend of the Facebook founder, called out his “Facebook Manifesto” as unmitigated bullsh%$.

They really need to find someone with more credibility than Zuckerberg or Sheryl Sandberg as their spokesperson.

I would suggest Mohammed Saeed al-Sahhaf, AKA Baghdad Bob, as an improvement.

David “Joe Isuzu” Leisure would work too.