Tag: Computer

Tweet of the Day

This is unbelievably true:

I’ve been thinking about the movie Johnny Mnemonic lately, and how it turns out that the most unrealistic thing about that movie is corporations giving a damp shit about data security.

— “Observing A Whole Lot of Anti-Chinese Racism” Cat (@No_X_in_Nixon) February 13, 2020

Whenever you hear of a computer hack, know that it is far more likely the result of shortsighted and parsimonious policies from companies than it is a super hacker.

Until there is personal jeopardy for executives who are reckless with data, this will continue.

What’s the Problem with an Encryption Back Door?

After successfully creating a health care app for doctors to view medical records, Diego Fasano, an Italian entrepreneur, got some well-timed advice from a police officer friend: Go into the surveillance business because law enforcement desperately needs technological help.

In 2014, he founded a company that creates surveillance technology, including powerful spyware for police and intelligence agencies, at a time when easy-to-use encrypted chat apps such as WhatsApp and Signal were making it possible for criminal suspects to protect phone calls and data from government scrutiny.

The concept behind the company’s product was simple: With the help of Italy’s telecom companies, suspects would be duped into downloading a harmless-seeming app, ostensibly to fix network errors on their phone. The app would also allow Fasano’s company, eSurv, to give law enforcement access to a device’s microphone, camera, stored files and encrypted messages.


“I started to go to all the Italian prosecutors’ offices to sell it,” explained Fasano, a 46-year-old with short, dark-brown hair and graying stubble. “The software was good. And within three years, it was used across Italy. In Rome, Naples, Milan.”

Even the country’s foreign intelligence agency, L’Agenzia Informazioni e Sicurezza Esterna, came calling for the services of eSurv’s spyware, Exodus, Fasano said.

But Fasano’s success was short lived, done in by a technical glitch that alerted investigators that something could be amiss. They followed a digital trail between Italy and the U.S. before unearthing a stunning discovery.

Authorities found that eSurv employees allegedly used the company’s spyware to illegally hack the phones of hundreds of innocent Italians—playing back secretly recorded calls aloud in the office, according to legal documents. The company also struck a deal with a firm with alleged links to the Mafia, authorities said.

The discovery prompted a criminal inquiry involving four Italian prosecutor’s offices. Fasano and another eSurv executive, Salvatore Ansani, were charged with fraud, unauthorized access to a computer system, illicit interception and illicit data processing.


The demand for such technology has been driven in part by the rise in popularity of encrypted mobile phone apps and the reality that it is getting harder for law enforcement to glean evidence without the assistance of Silicon Valley giants such as Apple Inc., which is currently at loggerheads with the FBI over access to an iPhone used by an accused terrorist.


What makes the allegations against eSurv so astounding is that, if true, the company became involved in the spying itself—and did so right in the heart of Europe.


“I think that no prosecutors in Western countries have ever worked on a case like this,” Giovanni Melillo, Naples’ chief prosecutor, said in a recent interview at his Naples office. This story is based on interviews with Italian authorities and a review of 170 pages of documents outlining the evidence collected, much of it never before reported.

In the city of Benevento, about 40 miles northeast of Naples, technicians working for the prosecutor’s office in 2018 were using Exodus to hack the phones of suspects in an investigation. That October, one of the technicians noticed that the network connection to Exodus was frequently dropping out, according to Italian authorities.

The technician did some troubleshooting and found a glaring problem. The Exodus system was supposed to operate from a secure internal server accessible only to the Benevento prosecutor’s office. Instead, it was connecting to a server accessible to anyone on the internet, protected only by a username and password, the authorities said.

The implications were enormous: hackers could potentially gain access to the platform and view all of the data that Italian prosecutors were covertly harvesting from suspects’ phones in some of Italy’s most sensitive law enforcement investigations. (Authorities don’t know if the server was in fact ever hacked.)


The investigation was eventually handed off to the prosecutor’s office in nearby Naples, which is responsible for handling major computer crimes in the region. The Naples prosecutor began a more in-depth probe—and found that eSurv had been storing a vast amount of sensitive data, unencrypted, on an Amazon Web Services server in Oregon.

The data included thousands of photos, recordings of conversations, private messages and emails, videos, and other files gathered from hacked phones and computers. In total, there were about 80 terabytes of data on the server—the equivalent of roughly 40,000 hours of HD video.

“A large part of the data is secret data,” said Melillo. “It’s related to the investigation of Mafia cases, terrorist cases, corruption cases.”
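The “40,000 hours of HD video” equivalence above amounts to an assumption of roughly 2 GB per hour, which is in the right ballpark for compressed HD streams. A quick back-of-envelope check (the 2 GB/hour rate is my assumption, not from the article):

```python
# Back-of-envelope check of "80 TB ≈ 40,000 hours of HD video".
# Assumes compressed HD video at about 2 GB per hour.
TB = 1e12  # bytes
GB = 1e9   # bytes

total_bytes = 80 * TB
bytes_per_hour = 2 * GB

hours = total_bytes / bytes_per_hour
print(int(hours))  # 40000
```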


When Fasano began thinking about creating a police surveillance tool, he recruited a small team to explore the possibilities. They eventually developed a spyware tool that would allow police to hack Android phones by luring suspects into downloading what looked like an ordinary app from the Google Play store.


The app didn’t contain spy software, allowing it to bypass Google’s automated virus scans. But once a person downloaded it, the app served as a gateway through which eSurv could place spyware onto a person’s phone. The spyware would then covertly take total control: recording audio, taking photos and giving police access to encrypted messages and files, Fasano said.


In all, eSurv’s internal surveillance unit, known as the “Black Team,” spied on more than 230 people who weren’t authorized surveillance targets, according to police documents. Some of the surveillance victims were listed in eSurv’s internal files as “The Volunteers,” suggesting they were unwitting guinea pigs.


After reviewing evidence about the Black Team in May, a judge concluded that Exodus appeared to have been “designed and intended from the outset to operate with functions that are very distant from the canons of legality.” The judge approved a warrant to place Ansani and Fasano under house arrest; the investigation is continuing and additional charges could be filed, according to Italian authorities.


“It’s like a gun,” said Vincenzo Ioppoli, Fasano’s lawyer. “Once you have sold it, you don’t know how it will be used.”

This is why you can never trust law enforcement, or their contractors, not to abuse the power that you give them.

I Am Unworthy…

I have been blogging for more than 12 years, and when I posted about the suspicious death of a Canadian cryptocurrency mogul, and the subsequent efforts by short-changed investors to exhume his body to confirm his death, I missed a most obvious and beautiful pun.

I have always claimed to be the worst writer on the internet, but today especially so, because I was not the one who came up with this:

Putting the Crypt in Crypto Currency

I am clearly unworthy.

Not the Source I Expect for Hard Investigative Journalism

The Hollywood Reporter has done a deep dive on the 2014 Sony Hack and finds convincing evidence that it was not the DPRK that hacked the studio:

The massive cyberattack just before Thanksgiving 2014 crippled a studio, embarrassed executives and reshaped Hollywood. The FBI blamed a North Korea scheme to retaliate for the comedy ‘The Interview,’ but many whose lives were upended have doubts. Says Seth Rogen: “The fact that [co-director Evan Goldberg and I] were never really specifically targeted always raised suspicions in my head.”

On Jan. 23, 2015, a manager at Sony Pictures Entertainment shot off an email to a group of 12 in the studio’s distribution department that offered intel about an upcoming film from rival Disney. “Midwest exhibitors went into McFARLAND USA expecting a boring track & field movie but came away pleasantly surprised,” the manager noted about the sports drama that had been screened the day before. It was a mundane missive: a Hollywood executive sizing up the competition.

What is extraordinary about the email is what sources say it reveals about the 2014 Sony Pictures hack — and the official FBI narrative that pins it on North Korea. The email was drafted nearly nine weeks after the now infamous cyberattack ostensibly had been contained. It was passed along to a U.S. cyber researcher in February 2015 by a Ukrainian hacker as alleged proof that his Russian associate had breached Sony and could still do so at will. Despite FBI director James Comey’s “very high confidence” that Kim Jong Un was to blame, the Ukrainian source was maintaining that hackers were still accessing Sony’s system — and they weren’t North Korean.

Exactly five years have passed since the Sony hack, a seismic event that announced itself just before the Thanksgiving holiday on Nov. 24, 2014, when a menacing skeleton simultaneously popped up on thousands of Sony computer screens with the message: “We’ve obtained all your internal data including your secrets.”

That was followed by 22 days of massive data dumps that exposed embarrassing executive email exchanges (like one between then-co-chairman Amy Pascal and producer Scott Rudin in which he refers to Angelina Jolie as “a minimally talented spoiled brat”), trade secrets (including overtures from Marvel to bring Sony-owned Spider-Man into its universe) and five upcoming full-length films (such as Brad Pitt’s Fury). The breach, which former National Intelligence director James Clapper dubbed “the most serious cyberattack ever made against U.S. interests,” rocked the industry and forever altered how studios think about cybersecurity and the global impact of their content. In the aftermath, nearly all of Sony’s top management was swept out.

Although the FBI’s North Korea attribution was swift (it took just 25 days) and has never wavered, many of those impacted still harbor questions about what exactly happened when a previously unknown hacker group named Guardians of Peace decimated Sony’s computer infrastructure and brought one of the six major studios to its knees. THR spoke to more than two dozen insiders and executives who worked at Sony at the time, including some who still do, and more than half say they harbor doubts about the FBI’s official narrative, which maintains that the hack was a response from North Korea because leader Kim Jong Un objected to his depiction in Seth Rogen’s comedy The Interview.


Although the disgruntled-staffer angle generated headlines back in 2014, less explored is the prospect of someone using the hack as a weapon to manipulate the Sony share price. A number of investors sold large chunks of stock in 2014 between the supposed late September breach and the day the world learned of the attack on Nov. 24. There was also one spike in short-selling activity in the weeks leading up to Nov. 24. It is unclear if the SEC ever looked into Sony shortings or sell-offs given that SEC investigations are confidential unless it files an action in court.

This is not a smoking gun that the FBI was wrong, but it certainly raises significant doubts.

Also, it would not be the first time that the FBI seemed more concerned with closing the case than it did with catching the actual malefactor.

Yes, $350 Screen Replacements are a Money Loser

So says Apple about its iPhone repairs, where it claims that it loses money on each repair that it makes.

So the unaffiliated repair shop down the street can fix it for $100.00, but Apple can’t at 3½ times the price.

I want their accountant.

Actually, I don’t want their accountant, I want whatever their accountant is smoking:

It can be tough in the repair industry, and no one knows that better than struggling corporation Apple.

Cupertino has long been criticized for trying to control what its customers can do with their products, and especially so for charging what critics have said is an unjustifiable mark-up on repairing everything from iPhones to MacBooks.

But it’s just not true, the iGiant revealed this week to US Congress: in fact, despite charging between double and triple what other repair shops charge for fixing problems, Apple (2018 profit: $60bn) actually loses money on its repair business.

Asked by the House Judiciary subcommittee to “identify the total revenue that Apple derived from repair services,” the Cupertino idiot-tax operation revealed [PDF] that: “For each year since 2009, the costs of providing repair services has exceeded the revenue generated by repairs.”

That’s right, it may charge you $329 for a screen replacement that costs $100 everywhere else. Or $80 for a battery that costs $30 across the street. Or even $475 to replace a single key at an Apple store. But poor old Apple is making a loss every time.

Which is, of course, nonsense, though it’s interesting to explore how Apple can make the claim with a straight face. And the answer is creative accounting.


In short, Apple has, for years, carefully restricted the number of repair shops that can service its products in order to maintain artificially high prices – prices that it often sets for its authorized outlets. And it has gone to some lengths to discourage any repairs to its products outside of those authorized outlets or its own stores.

But people have grown fed up with the situation – hence the congressional review. That has resulted in a slow and carefully controlled expansion of independent repair shops approved by Apple. But even now someone at such an outlet has to go through an official Apple repair course before they’re allowed to touch its products. And Apple has put plenty of controls on both the course and any subsequent evaluation and approval of people that want to repair its products independently.

Apple defends this blatant market control in a dozen different ways in its responses, painting a picture of super-complex machinery that requires specialist and highly trained technicians. It’s nonsense but for some reason it’s effective, especially when people spend small fortunes on beloved electronics.


Even accounting for Apple’s BS however, how does it justify the claim that it is actually losing money on its repair business, despite charging multiples of what every other repair business does?

Easy: it counts its own ridiculous repair costs as what customers would have paid had they not taken out its over-priced warranty. So if a customer pays $199 for AppleCare+ for their iPhone XS Max and brings it in to replace the screen, paying just $29 instead of the $329 out-of-warranty cost, Apple reckons it has just lost $101 – because that’s what the customer would have paid if they didn’t have a warranty.

Of course that completely ignores the fact that it costs Apple nowhere near $329 to replace the screen of an iPhone XS Max. We have no idea how much it does cost, and Apple isn’t going to tell us either, but that is how you get away with ripping people off while claiming poverty at the same time.
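The quoted $101 figure only adds up if Apple books the AppleCare+ price as repair revenue and the out-of-warranty price as its “cost.” A sketch of that reading (the dollar figures are from the article; the bookkeeping interpretation is my own):

```python
# Sketch of the "loss" accounting described above.
# Figures from the article; how Apple books them is an assumption.
APPLECARE_PRICE = 199   # AppleCare+ for an iPhone XS Max
IN_WARRANTY_FEE = 29    # screen replacement fee under AppleCare+
OUT_OF_WARRANTY = 329   # same repair without AppleCare+

# Revenue Apple actually receives for one covered screen repair:
revenue = APPLECARE_PRICE + IN_WARRANTY_FEE   # 228

# "Cost" as Apple appears to count it: the out-of-warranty price
# the customer would otherwise have paid.
imputed_cost = OUT_OF_WARRANTY

paper_loss = imputed_cost - revenue
print(paper_loss)  # 101, the figure quoted in the article
```

Note that nothing in this calculation involves what the repair actually costs Apple in parts and labor, which is the whole trick.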

The cult of Apple is a manifestation of P.T. Barnum’s observation about the natural rate of increase of suckers.

The Other Problem With Self-Driving Cars

There are a number of claims as to the benefits of self-driving cars, and one of them, that they would make transportation more efficient, has been shown by a study to be objectively false.

The study was fairly straightforward: the researchers gave people cars with drivers, and studied how their vehicle use changed.

Many more trips and many more miles driven, meaning more congestion and more waste and pollution:

A few years ago, Mustapha Harb realized there was a problem in his field of research about how autonomous cars will change the way people travel. The solution to the problem he settled on was as simple as it was revealing.


One did not have to look far for studies and articles suggesting fleets of self-driving cars could, for example, reduce traffic. These techno-utopian articles claimed the same highways we use today could, with slight modifications, accommodate many more autonomous vehicles than they do human-driven cars. AVs could, using more precise control systems, follow one another at much closer distances. Similarly, lanes could be narrowed, accommodating perhaps six lanes where there are only five today.

These promises were, and remain, the foundation upon which AV utopianism has been built: a greener, safer, faster, and more pleasant transportation future just around the corner.

But, Harb found, these promises couldn’t be checked. After all, self-driving cars didn’t exist yet.

Harb, a Ph.D. candidate at the University of California Berkeley’s Department of Civil and Environmental Engineering, was intimately familiar with the research already done on the subject in his field. Most of it consisted of surveys, which, while far from perfect, were the best approach available.

“You would send people a survey,” Harb described, “like, hey, there’s a self-driving car in the future, how do you think your travel will change in the future?”

These studies, flawed as they were, found something very different from the rosy future AV companies wanted investors and the public to imagine. They found reason to believe AVs would drastically increase the number of vehicle miles traveled, commonly shortened to “VMT” in academic literature.

And the more vehicle miles traveled, all else being equal, the more traffic and emissions we can expect, canceling out many of the AVs’ touted benefits.


While the survey results were potentially alarming, it was difficult for researchers like Harb to put too much stock into them. Some surveys predicted only a few percentage points increase in VMT in a self-driving car future. Others, upwards of 90 percent.


But his advisor, Professor Joan Walker, had an idea. What if they hired chauffeurs to drive random people around?

The chauffeur, Walker outlined, will do the driving for you. And, just like the most optimistic AV future of fully autonomous robot cars zooming around, you don’t even have to be in the car.

“All these things the self-driving car can do for you in the future,” Harb summarized, “a chauffeur can do for you today.”

The concept, once it reached published form, elicited praise and jealousy from other researchers. “It’s delightfully clever and brazenly simple,” gushed Don MacKenzie, head of the Sustainable Transportation Lab at the University of Washington. “I wish I had thought of it.”


For example, the chauffeur could bring the kids to soccer practice and back or drive a friend home and then return to the house. They could even pick up groceries and make a Target run to simulate a driverless car future where items could get bought online and loaded into your AV by a store employee before returning home.

Harb readily admits the study is not perfect, nor is it likely to prove the most accurate predictor of what our autonomous vehicle future looks like. But it is, by many estimates, the best first approximation we have.

And that approximation is, in key ways, a vision of things to come.

Harb thought they would see people sending their cars out more than if they were driving themselves, something like a 20 or 30 percent increase in VMT with the chauffeurs. Nothing to sneeze at, of course, but towards the middle of the wide range of the results the surveys had suggested.

He was wrong. The subjects increased how many miles their cars covered by a collective 83 percent when they had the chauffeur versus the week prior.

To put these findings in perspective, when researchers looked into the impact Uber and Lyft have had on urban congestion, they reported an increase in VMT in the single digits. San Francisco, which has seen some of the largest percentage increase of cars driving around in its downtown thanks to Uber and Lyft, had an increased VMT of 12.8 percent.

Knowing how much gridlock and traffic those rideshare cars have added to the city, it is almost impossible to imagine six and a half times as much driving.
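The “six and a half times” comparison follows directly from the two percentages quoted above:

```python
# Ratio of the chauffeur study's VMT increase to San Francisco's
# Uber/Lyft-driven increase, both figures quoted in the article.
chauffeur_increase = 83.0     # percent, Berkeley chauffeur experiment
sf_rideshare_increase = 12.8  # percent, SF downtown VMT from Uber/Lyft

ratio = chauffeur_increase / sf_rideshare_increase
print(round(ratio, 1))  # 6.5
```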


But none of the researchers Jalopnik spoke to believe those flaws detract from the overarching, real-world conclusion: AVs will change people’s behavior in profound ways. MacKenzie called it “probably the best data we have based on actual, measured behavior.”

There are places for self-driving cars, but the reality envisioned by folks like Elon Musk looks to be rather dystopian.

Google Is More Evil Than You Think

It turns out that Google has been deceiving us about the level of human intervention in its search results:

Google, and its parent company Alphabet, has its metaphorical fingers in a hundred different lucrative pies. To untold millions of users, though, “to Google” something has become a synonym for “search,” the company’s original business—a business that is now under investigation as more details about its inner workings come to light.

A coalition of attorneys general investigating Google’s practices is expanding its probe to include the company’s search business, CNBC reports while citing people familiar with the matter.


Google’s decades-long dominance in the search market may not be quite as organic as the company has alluded, according to The Wall Street Journal, which published a lengthy report today delving into the way Google’s black-box search process actually works.

Google’s increasingly hands-on approach to search results, which has taken a sharp upturn since 2016, “marks a shift from its founding philosophy of ‘organizing the world’s information’ to one that is far more active in deciding how that information should appear,” the WSJ writes.

Some of that manipulation comes from very human hands, sources told the paper in more than 100 interviews. Employees and contractors have “evaluated” search results for effectiveness and quality, among other factors, and promoted certain results to the top of the virtual heap as a result.

One former contractor the WSJ spoke with described down-voting any search results that read like a “how-to manual” for queries relating to suicide until the National Suicide Prevention Lifeline came up as the top result. According to the contractor, Google soon after put out a message to the contracting firm that the Lifeline should be marked as the top result for all searches relating to suicide so that the company algorithms would adjust to consider it the top result.

Or in another instance, sources told the WSJ, employees made a conscious choice for how to handle anti-vax messaging:


The company has since maintained an internal blacklist of terms that are not allowed to appear in autocomplete, organic search, or Google News, the sources told the WSJ, even though company leadership has said publicly, including to Congress, that the company does not use blacklists or whitelists to influence its results.

The modern blacklist reportedly includes not only spam sites, which get de-indexed from search, but also the type of misinformation sites that are endemic to Facebook (or, for that matter, Google’s own YouTube).

We already know that algorithms tend to reinforce, rather than mitigate, human bias and bigotry.

Now we know that there are discrete human fingers on the scales.

This is why we need real antitrust enforcement.

The Computer is Your Friend

Someone was ranting about how HR evaluation software is less accurate than reading the entrails of a recently slaughtered gazelle. (See below; it’s worth the read.)

Someone gave me this strategy for getting a human being to look at your resume, and it is brilliant:

So, job seekers, in case no one has told you this:

Always put the job description in tiny white text at the bottom of your resume so the resume scanner software picks you up as a 100% match but it’s imperceptible to the human eye

— Michele Hansen (@mjwhansen) November 14, 2019

Full Twitter thread after the break:

Also this

Little Bobby Droptables Lives!

It looks like someone has been reading the “webcomic of romance, sarcasm, math, and language,” xkcd, and has developed an SQL injection attack to wipe traffic cameras.

I am not sure if it would actually work, but I am profoundly impressed by how life mirrors one of the most popular web-comics on the web:

Typical speed camera traps have built-in OCR software that is used to recognize license plates. A clever hacker decided to see if he could defeat the system by using SQL Injection…

The basic premise of this hack is that the hacker has created a simple SQL statement which will hopefully cause the database to delete any record of his license plate. Or so he (she?) hopes. Talk about getting off scot-free!
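Whether or not any real camera system is vulnerable, the underlying bug is real and trivial to demonstrate. A minimal sketch using SQLite (the table name and plate strings are invented for illustration), along with the standard fix, parameterized queries:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sightings (plate TEXT)")
conn.execute("INSERT INTO sightings VALUES ('AAA111')")

# A camera that pastes the OCR'd plate straight into SQL is injectable.
plate = "ZU0666'); DELETE FROM sightings; --"  # hypothetical hostile plate
conn.executescript(
    f"INSERT INTO sightings (plate) VALUES ('{plate}');"
)
# The injected DELETE wiped every record, including the one just inserted.
print(conn.execute("SELECT COUNT(*) FROM sightings").fetchone()[0])  # 0

# The fix: parameterized queries treat the plate as data, never as SQL.
conn.execute("INSERT INTO sightings (plate) VALUES (?)", (plate,))
print(conn.execute("SELECT COUNT(*) FROM sightings").fetchone()[0])  # 1
```

The hostile plate closes the string literal and the `VALUES` clause, smuggles in a `DELETE`, and comments out the rest, which is exactly the xkcd gag.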

I do not know if it will work, but I am profoundly amused.

Link to XKCD cartoon:

Thanks, Mark

Hundreds of millions of phone numbers linked to Facebook accounts have been found online.

The exposed server contained more than 419 million records over several databases on users across geographies, including 133 million records on U.S.-based Facebook users, 18 million records of users in the U.K., and another with more than 50 million records on users in Vietnam.

But because the server wasn’t protected with a password, anyone could find and access the database.

Each record contained a user’s unique Facebook ID and the phone number listed on the account. A user’s Facebook ID is typically a long, unique and public number associated with their account, which can be easily used to discern an account’s username.


Some of the records also had the user’s name, gender and location by country.

Seriously, f%$# Zuck.

Ron Wyden’s Mouth to God’s Ear

The distinguished gentleman from Oregon is suggesting, convincingly IMNSHO, that Mark Zuckerberg should be criminally prosecuted for his regular and consistent lying (fraud) about the privacy and use of data of his users.

I agree, and his opinion applies to Zuckerberg’s routine and persistent fraud.

Also, I agree with Wyden that section 230 of the CDA does not prevent this.

Fraud, both of his users and his advertisers, is not protected by section 230:

Mark Zuckerberg has “repeatedly lied to the American people about privacy,” Sen. Ron Wyden (D-OR) said in a recent interview with the Willamette Week, a Portland alternative weekly newspaper. “I think he ought to be held personally accountable, which is everything from financial fines to—and let me underline this—the possibility of a prison term.”

Zuckerberg, Wyden said, has “hurt a lot of people.”

Wyden was talking to the Willamette Week about Section 230 of the Communications Decency Act, a 1996 law that gives online platforms like Facebook broad immunity for content posted by their users. Wyden was the co-author of the law and has been one of its most ardent defenders ever since.


But in the last decade, the Internet has become pervasive, and the downsides of unfettered online communication have become more obvious. Major online platforms have responded by beefing up their moderation policies. But critics on both the left and the right have criticized their policies, and some have called for rolling back Section 230.

Wyden argues that the solution is more vigorous enforcement of laws that do still apply to online companies—including laws that require companies to be honest with consumers and investors. Wyden pointed to laws that allow executives to be held personally accountable if they lie about their company’s finances. But Wyden didn’t point to any specific law that could allow such harsh penalties over privacy violations.

Textbook Publisher Sees Brave New World of Screwing Students Even Harder

Kara Swisher of Recode has a remarkably credulous interview with textbook publisher CEO John Fallon, and swallows his line of crap without any challenge.

Fallon is claiming that somehow or other, the digital textbook will fix Pearson’s flagging textbook business (it probably will) and will make things better for students. (It certainly will not.)

The textbooks have gotten expensive enough that the resale market, and the 3rd-party rental market, have been eating the publisher’s lunch.

Pearson’s solution is digital, not because it is more convenient, nor because it is better for students, but because it allows it to lock down the market, preventing students from selling their old books, and to extend its monopoly rents.

This is just another way to f%$# their customers.

Adding to the List of They Who Must Not Be Named

When a gunman opened fire at the Gilroy Garlic Festival in Gilroy, California, on Sunday evening, killing at least three people, including a 6-year-old boy, and wounding 12 others, Dilbert creator Scott Adams apparently saw a juicy marketing opportunity for his blockchain app.

Adams is best known for creating Dilbert, a comic strip satirizing soulless corporate culture and the grueling punishment dished out on its eponymous engineer by idiot co-workers and clueless management. But he also moonlights. In addition to punditry on topics ranging from fifth-dimensional chess analyses proclaiming Donald Trump a genius Pavlovian manipulator to tortured theological treatises, to questioning the specifics of the Holocaust’s atrocities, Adams is the co-founder of app company WhenHub. WhenHub is similar to Cameo, the app that allows everyday people to pay celebrities to create customized videos, except instead of pre-recorded messages from movie stars and rappers, it offers live chats with a range of subject-matter experts.


Adams seems to have concluded the Gilroy Garlic Festival shooting was an ideal time to direct-market this app to witnesses—who, as he made quite clear on Twitter, he believed could cash in on their traumatic experience by selling interviews to news organizations via WhenHub.

According to the Los Angeles Times, the gunman opened fire sometime around 5:30 p.m. PT on Sunday when the festival was nearing its conclusion. Less than three hours later, at 8:21 p.m. PT, about 13 minutes after President Trump advised locals to exercise caution because reports indicated the shooter was still at large, Adams began pitching the survivors on signing up for, and charging for, interviews with media outlets via WhenHub.

“If you were a witness to the #GilroyGarlicFestivalshooting please sign on to Interface by WhenHub (free app) and you can set your price to take calls. Use keyword Gilroy,” Adams tweeted.

Roughly 23 minutes later, Adams had some advice for critics who correctly identified this as shameless opportunism aiming to capitalize off an atrocity: Grow up and stop it with the “fake outrage.”


Adams did not immediately respond to our request for comment, although he did respond to the controversy in a livestream Monday morning on Periscope, where he continued to stand by his promotion efforts and blamed the controversy on socialism.

#BoycottDilbert is trending on Twitter, with good reason.

He hasn’t had an original idea for his comic strip since before the end of the last century.

Don’t read his comics, don’t buy his merch, and if you want to complain to your local paper about this, that would be nice too.

Scott Adams should have been drowned at birth.

Why DRM Sucks

Microsoft is shutting down its DRM servers for its ebooks, which means that anyone who ever bought a book from them will no longer be able to read them.

Cory Doctorow warned us about this, as he strongly notes:

“The books will stop working”: That’s the substance of the reminder that Microsoft sent to customers for their ebook store, reminding them that, as announced in April, the company is getting out of the ebook business because it wasn’t profitable enough for them, and when they do, they’re going to shut off their DRM servers, which will make the books stop working.

Almost exactly fifteen years ago, I gave an influential, widely cited talk at Microsoft Research where I predicted this exact outcome. I don’t feel good about the fact that I got it right. This is a f%$#ing travesty.

(%$# mine)

Do not tolerate DRM in your media.
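The failure mode here is mechanical: with server-dependent DRM, the reader app cannot decrypt a book without first checking in with the vendor’s license server, so killing the server kills every purchased copy. A minimal sketch of that dependency, with class and method names that are purely illustrative (not Microsoft’s actual implementation):

```python
# Hypothetical sketch of server-dependent DRM; all names are illustrative.

class LicenseServer:
    """Stands in for the vendor's DRM license server."""
    def __init__(self):
        self.online = True

    def fetch_decryption_key(self, book_id: str) -> str:
        if not self.online:
            raise ConnectionError("license server unreachable")
        return f"key-for-{book_id}"  # in reality, a per-user crypto key

def open_book(server: LicenseServer, book_id: str) -> str:
    """The reader app must obtain a key before it can decrypt the book."""
    try:
        key = server.fetch_decryption_key(book_id)
        return f"decrypted {book_id} with {key}"
    except ConnectionError:
        return "ERROR: cannot verify license; book is unreadable"

server = LicenseServer()
print(open_book(server, "my-purchased-ebook"))  # fine while the server is up

server.online = False  # vendor exits the ebook business
print(open_book(server, "my-purchased-ebook"))  # the book "stops working"
```

The book’s ciphertext is still sitting on your device in both cases; only the vendor’s willingness to keep a server running decides whether you can read what you paid for.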


A group of lawyers had an idea: They would post pr0n videos on BitTorrent, and then when people downloaded the films, they would contact them and demand money.

Otherwise, they would take them to court for their “illegal” downloads, where their targets would be revealed as pr0n watchers.

Of course, the downloads were not illegal; the films had been uploaded by the lawyers and their own agents.

Well, the HMFIC of this scheme just got sentenced to 14 years in prison.

It could not happen to a more deserving asshole:

A federal judge in Minneapolis has sentenced Paul Hansmeier to 14 years in prison for an elaborate fraud scheme that involved uploading pornographic videos to file-sharing networks and then threatening to sue people who downloaded them.

“It is almost incalculable how much your abuse of trust has harmed the administration of justice,” said Judge Joan Ericksen at a Friday sentencing hearing.

We’ve been covering the antics of Hansmeier and his business partner John Steele for many years. Way back in 2012, we started reporting on a law firm called Prenda Law that was filing lawsuits against people for sharing pornographic films online. Prenda wasn’t the only law firm filing these kinds of lawsuits, but Prenda came up with a novel way of ginning up more business: uploading the films itself, including some that were produced by Prenda associates.

A key part of the firm’s strategy was to seek settlements of a few thousand dollars. The demanded sums were small enough that it cost less to settle the lawsuits than fight them. Prosecutors say that the men made more than $6 million from copyright settlements between 2010 and 2013.


As the extent of the alleged fraud became apparent, judges began referring the pair to federal prosecutors. In 2016, the two men were arrested and charged with federal fraud, perjury, and money laundering.

The Minneapolis Star Tribune summarized the prosecutors’ case: “When challenged by judges around the country, Hansmeier blamed other lawyers who were hired to file lawsuits on his behalf, lied to the courts about his own involvement, and ordered the destruction of evidence.”

This is a very well deserved ass whupping.

This is Not Going to End Well………

One of the problems with cyber-weaponry is that any time you use it, you are handing your adversary the detailed plans of that weapon, and the means to produce that weapon and use it against you.

One need only look at the history of Stuxnet: once it was out in the wild, it was repeatedly repurposed in other attacks.

Needless to say, the permanent-war crowd seems to think that whatever they do to someone else will never reflect back upon them.

So it comes as no surprise that we now have reports that the United States is launching attacks on the Russian power grid.

Not only are we giving the Russians these cyber weapons, but we have just validated attacks on our infrastructure by every state and non-state actor so inclined:

The United States is stepping up digital incursions into Russia’s electric power grid in a warning to President Vladimir V. Putin and a demonstration of how the Trump administration is using new authorities to deploy cybertools more aggressively, current and former government officials said.

In interviews over the past three months, the officials described the previously unreported deployment of American computer code inside Russia’s grid and other targets as a classified companion to more publicly discussed action directed at Moscow’s disinformation and hacking units around the 2018 midterm elections.

Advocates of the more aggressive strategy said it was long overdue, after years of public warnings from the Department of Homeland Security and the F.B.I. that Russia has inserted malware that could sabotage American power plants, oil and gas pipelines, or water supplies in any future conflict with the United States.

But it also carries significant risk of escalating the daily digital Cold War between Washington and Moscow.

Gee, you think?


But now the American strategy has shifted more toward offense, officials say, with the placement of potentially crippling malware inside the Russian system at a depth and with an aggressiveness that had never been tried before. It is intended partly as a warning, and partly to be poised to conduct cyberstrikes if a major conflict broke out between Washington and Moscow.

The commander of United States Cyber Command, Gen. Paul M. Nakasone, has been outspoken about the need to “defend forward” deep in an adversary’s networks to demonstrate that the United States will respond to the barrage of online attacks aimed at it. 

Again, if your opponent discovers this, they have the same tech that you do, as well as the means to manufacture and deliver the payload.

This is shortsighted and dangerous.

But there is also something even scarier:


Two administration officials said they believed Mr. Trump had not been briefed in any detail about the steps to place “implants” — software code that can be used for surveillance or attack — inside the Russian grid.

Pentagon and intelligence officials described broad hesitation to go into detail with Mr. Trump about operations against Russia for concern over his reaction — and the possibility that he might countermand it or discuss it with foreign officials, as he did in 2017 when he mentioned a sensitive operation in Syria to the Russian foreign minister.

It appears that the only thing scarier than Trump being in charge is Trump NOT being in charge.

The idea that military and intelligence authorities could initiate attacks on a potential adversary without any sort of authorization from civilian authorities is profoundly terrifying.

The Computer is Your Friend

It’s an article about the problems with self-checkout at the grocery store, a nut at least three orders of magnitude easier to crack than a self-driving car (everything has a bar code, the shopper can re-swipe, etc.), but it still does not work.
Much like self-driving cars, it probably does not deliver the benefits promised, and its proponents have proposed redefining the environment to accommodate their “update”:

Automation is often presented as an inexorably advancing force, whether it’s ushering in a threat to jobs or a promise of increased leisure or larger profits. We’re made to imagine the robots rising, increasingly mechanized systems of production, more streamlined modes of everyday living. But the truth is that automation technology and automated systems very often fail. And even when they do, they nonetheless frequently wind up stranded in our lives.

For every automated appliance or system that actually makes performing a task easier—dishwashers, ATMs, robotic factory arms, say—there seems to be another one—self-checkout kiosks, automated phone menus, mass email marketing—that actively makes our lives worse.

I’ve taken to calling this second category, simply, sh%$ty automation.

Sh%$ty automation usually, but not always, comes about when new user-facing technology is adopted by a company or institution for the ostensible reason of minimizing labor and cutting costs. Nobody likes wading through an interminable phone menu to try to address a suspect charge on a phone bill—literally, everyone would rather speak with a customer service rep. But that’s the system we’re stuck with because a corporation decided that the inconvenience to the user is well worth the savings in labor costs.

That’s just one example. But it gets at what makes spending some time wading through the world of sh%$ty automation worthwhile—it often doesn’t even matter if automation improves anything at all for the customer, for the user, for anyone. If some enterprise solutions pitchman or government contractor can sell the top brass on the idea that a half-baked bit of automation will save it some money, the cashier, clerk, call center employee might be replaced by ill-functioning machinery, or see their hours cut to make space for it, the users will be made to suffer through garbage interfaces that waste hours of their day or make them want to hellscream into the receiver—and no one wins. Not even, sometimes, the company or organization seeking the savings, which can suffer reputational damage.


To start, let’s look at everyone’s favorite cluster of machinery to walk past in the grocery store with a dismissive scowl, to hold off approaching until you’ve finally, painfully decided the line you’ve been stuck in is so painfully not-moving it’s worth the hassle: Self-checkout kiosks.

There are few better poster children for sh%$ty automation than self-checkout. I have literally never, as in not one single time, successfully completed a checkout at a self-service station in a grocery store without having to call a human employee over. And it’s not because I’m an idiot. Or not entirely, anyway. Incessant, erroneous repetitions of “please place your item in the bag” and “unknown item in the bagging area” are among the most-loathed phrases in the 21st century lexicon for a reason, and that reason is that self-checkout is categorically awful.

Hence, I turned to Alexandra Mateescu, an ethnographer and researcher at Data & Society, and a co-author, with Madeleine Clare Elish, of “AI in Context: The Labor of Integrating New Technologies,” which uses self-checkout as a case study, to find out why.

To understand how we arrived at our current self-checkout limbo, and why it’s terrible and dysfunctional in the special way that it is, it helps to understand that the technology we encounter in the grocery store is just the most recent iteration in a century-long drive to offload more of the work involved in the shopping process onto us, the shoppers.

It sounds an awful lot like the self-driving car.
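The “unknown item in the bagging area” failures quoted above are easy to see in a sketch. Self-checkout kiosks are widely reported to put a scale under the bagging area and compare it against catalog weights for the scanned items, within some tolerance; the numbers and function below are hypothetical, but they show how ordinary real-world weight variance trips the alarm:

```python
# Illustrative sketch, not any vendor's real code: the bagging-area
# weight check commonly blamed for "unexpected item" false alarms.

TOLERANCE_G = 15  # grams of slack the kiosk allows (assumed value)

def bagging_check(expected_weights_g, measured_total_g):
    """Compare the scale under the bagging area against the sum of
    catalog weights for everything scanned so far."""
    expected = sum(expected_weights_g)
    if abs(measured_total_g - expected) > TOLERANCE_G:
        return "unexpected item in the bagging area"
    return "ok"

# A shopper scans a loaf (catalog: 450 g) and a can (catalog: 350 g)...
scanned = [450, 350]
# ...but the actual items total 825 g, 25 g over catalog weight.
print(bagging_check(scanned, 825))  # 25 g > 15 g tolerance: false alarm
```

Tighten the tolerance and honest shoppers get nagged constantly; loosen it and the anti-theft check stops checking anything, which is roughly the limbo the article describes.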

Quote of the Day

My Infant Daughter’s Life Shouldn’t Be a Variable In Tesla Autopilot’s Public Beta

Jonathon Klein on The Drive

The author states the obvious: That Elon Musk and Tesla have been lying about their self-driving capabilities.

He also includes his own experience, when he was almost hit by a Tesla on Autopilot.

Using customers as beta subjects is very much a part of Silicon Valley culture, but this is not something that might screw up your playlist; it is operating a 2-ton death machine.


Still a Solution Looking for a Problem

The Bundesbank and Deutsche Boerse have finished a test of blockchain for settling financial transactions, and it did not go well:

A trial project using blockchain to transfer and settle securities and cash proved more costly and less speedy than the traditional way, Germany’s central bank president said.

The experiment, launched by the Bundesbank together with Deutsche Boerse in 2016, concluded late last year that the prototype “in principle fulfilled all basic regulatory features for financial transactions.” Yet while advocates of distributed ledger technology say it has the potential to be cheaper and faster than current settlement mechanisms, Jens Weidmann said the Bundesbank project did not bear those out.

“The blockchain solutions did not fare better in every way: the process took a bit longer and resulted in relatively high computational costs,” Weidmann said in Frankfurt on Wednesday. “Similar experiences have been made elsewhere in the financial sector. Despite numerous tests of blockchain-based prototypes, a real breakthrough in application is missing so far.”

Blockchain was built into crypto-currency to address a philosophical problem, how to separate government from currency, and it doesn’t work particularly well there either: performance issues crop up once the currencies scale.
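Weidmann’s “relatively high computational costs” are easiest to see on public chains, where consensus rests on proof-of-work. (The Bundesbank prototype was a permissioned system and presumably did not mine; this sketch illustrates the public-chain case.) The work is literally brute-force hashing, and each extra leading zero of difficulty multiplies the expected attempts by 16:

```python
import hashlib

def mine(data: str, difficulty: int) -> int:
    """Count up nonces until the SHA-256 digest of data+nonce starts
    with `difficulty` hex zeros -- the "work" in proof-of-work."""
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{data}{nonce}".encode()).hexdigest()
        if digest.startswith("0" * difficulty):
            return nonce
        nonce += 1

# Expected attempts grow 16x per extra zero: ~16, ~256, ~4096, ~65536...
for d in range(1, 5):
    print(d, mine("block-payload", d))
```

Settlement systems are judged on throughput and cost per transaction, which is exactly where a design that burns CPU time to manufacture trust between strangers loses to a conventional ledger run by parties who already trust a central operator.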