Tag: Software

Good Point

Matt Stoller makes a very good point: the penetration of “premier” IT management software firm SolarWinds by hackers* was a direct consequence of the private equity looting ethos.

They did not pay close attention to security (passwords from movies, seriously), and they outsourced work to Eastern Europe, where the FSB could recruit operatives on a day trip.

Security, you see, is not profitable, even if you are a cyber security firm:

Roughly a month ago, the premier cybersecurity firm FireEye warned authorities that it had been penetrated by Russian hackers, who made off with critical tools it used to secure the facilities of corporations and governments around the world.

The victims are the most important institutional power centers in America, from the FBI to the Department of Treasury to the Department of Commerce, as well as private sector giants Cisco Systems, Intel, Nvidia, accounting giant Deloitte, California hospitals, and thousands of others. As more information comes out about what happened, the situation looks worse and worse. Russians got access to Microsoft’s source code and into the Federal agency overseeing America’s nuclear stockpile. They may have inserted code into the American electrical grid, or acquired sensitive tax information or important technical and political secrets.

………

And that makes this hack quite scary, even if we don’t see the effect right now. Mark Warner, one of the smarter Democratic Senators and the top Democrat on the Intelligence Committee, said “This is looking much, much worse than I first feared,” also noting “The size of it keeps expanding.” Political leaders are considering reprisals against Russia, though it’s likely they will not engage in much retaliation we can see on the surface. It’s the biggest hack since 2016, when an unidentified group stole the National Security Agency’s “crown jewels” spy tools. It is, as Wired put it, a “historic mess.”

……….

The most interesting part of the cybersecurity problem is that it isn’t purely about government capacity at all; private sector corporations maintain critical infrastructure that is in the “battle space.” Private firms like Microsoft are being heavily scrutinized; I had one guest-post from last January on why the firm doesn’t manage its security problems particularly well, and another on how it is using its market power to monopolize the cybersecurity market with subpar products. And yet these companies have no actual public obligations, or at least, nothing formal. They are for-profit entities with little liability for the choices they make that might impose costs onto others.

………

All of which brings me to what I think is the most compelling part of this story. The point of entry for this major hack was not Microsoft, but a private equity-owned IT software firm called SolarWinds. This company’s products are dominant in their niche; 425 out of the Fortune 500 use SolarWinds. As Reuters reported about the last investor call in October, the CEO told analysts that “there was not a database or an IT deployment model out there to which [they] did not provide some level of monitoring or management.” While there is competition in this market, SolarWinds does have market power. IT systems are hard to migrate from, and this lock-in effect means that customers will tolerate price hikes or quality degradation rather than change providers. And it does have a large market share; as the CEO put it, “We manage everyone’s network gear.”

SolarWinds sells a network management package called Orion, and it was through Orion that the Russians invaded these systems, putting malware into updates that the company sent to clients. Now, Russian hackers are extremely sophisticated sleuths, but it didn’t take a genius to hack this company. It’s not just that criminals traded information about how to hack SolarWinds systems; one security researcher alerted the company last year that “anyone could access SolarWinds’ update server by using the password ‘solarwinds123.’”

Using passwords ripped from the movie Spaceballs is one thing, but it appears that lax security practice at the company was common, systemic, and longstanding. The company puts its engineering in the hands of cheaper Eastern European coders, where it’s easier for Russian engineers to penetrate their product development. SolarWinds didn’t bother to hire a senior official to focus on security until 2017, and then only after it was forced to do so by European regulations. Even then, SolarWinds’ CEO, Kevin Thompson, ignored the risk. As the New York Times noted, one security adviser at SolarWinds said he warned management that year that unless it took a more proactive approach to its internal security, a cybersecurity episode would be “catastrophic.” The executive in charge of security quit in frustration. Even after the hack, the company continued screwing up; SolarWinds didn’t even stop offering compromised software for several days after it was discovered.

………

And yet, not every software firm operates like SolarWinds. Most seek to make money, but few do so with such a combination of malevolence, greed, and idiocy. What makes SolarWinds different? The answer is the specific financial model that has invaded the software industry over the last fifteen years, a particularly virulent strain of recklessness typically called private equity.

………

In October, the Wall Street Journal profiled the man who owns SolarWinds, a Puerto Rican-born billionaire named Orlando Bravo of Thoma Bravo partners. Bravo’s PR game is solid; he was photographed beautifully, a slightly greying fit man with a blue shirt and off-white rugged pants in front of modern art, a giant vase and fireplace in the background of what is obviously a fantastically expensive apartment. Though it was mostly a puff piece of a silver fox billionaire, the article did describe Bravo’s business model.

………

As I put it at the time, Bravo’s business model is to buy niche software companies, combine them with competitors, offshore work, cut any cost he can, and raise prices. The investment thesis is clear: power. Software companies have immense pricing power over their customers, which means they can raise prices to locked-in customers, or degrade quality (which is the same thing in terms of the economics of the firm). As Robert Smith, one of his competitors in the software PE game, put it, “Software contracts are better than first-lien debt. You realize a company will not pay the interest payment on their first lien until after they pay their software maintenance or subscription fee. We get paid our money first. Who has the better credit? He can’t run his business without our software.”

………

Did this acquisition spree and corporate strategy work? Well that depends on your point of view; it certainly increased accounting profits. From a different perspective, however, the answer is no. Accounting profits masked that the corporate strategy was shifting risk such that the firm enabled a hack of the FBI and U.S. nuclear facilities. And from the user and employee perspective, the strategy was also problematic. It’s a little hard to tell, but if you look at software feedback comment forums, you’ll find a good number of IT pros dislike SolarWinds, seeing the firm as a financial project based on cobbling together random products from an endless set of acquisitions. (If you are at SolarWinds or another Thoma Bravo company, or use their products, send me a note on your experiences.)

………

It’s not clear to me that Bravo is liable for any of the damage that he caused, but he did make one mistake. Bravo got caught engaging in what very much looks like insider trading surrounding the hack. Here’s the Financial Times on what happened:

Private equity investors sold a $315m stake in SolarWinds to one of their own longstanding financial backers shortly before the US issued an emergency warning over a “nation-state” hack of one of the software company’s products.

The transaction reduced the exposure of Silver Lake and Thoma Bravo to the stricken software company days before its share price fell as vulnerabilities were discovered in a product that is used by multiple federal agencies and almost all Fortune 500 companies.

But the trade could prove embarrassing for Menlo Park-based Silver Lake and its rival Thoma Bravo, which rank among the biggest technology-focused private equity firms in the world.

………

In this case, however, possible insider trading really isn’t the problem. Though I hate the phrase, the real scandal isn’t what’s illegal, it’s what is legal. Bravo degraded the quality of software, which usually just means that people have to deal with stuff that doesn’t work very well, but in this case enabled a weird increase in geopolitical tensions and an espionage victory for a foreign adversary. It’s yet another example of what national security specialist Lucas Kunce notes is the mass transformation of other people’s risk into profit, all to the detriment of American society.

………

There are many ways to see this massive hack. It’s a geopolitical problem, a question of cybersecurity policy, and a legally ambiguous aggressive act by a foreign power. But in some ways it’s not that complex; the problem isn’t that Russians are good at hacking and U.S. defenses are weak, it’s that financiers in America make more money by sabotaging key infrastructure than by building it.

And they are celebrated for it. If Western nations had coherent political systems, the men responsible for this mess would be dragged in front of legislative committees and grilled over the business practices putting all of us at risk. Instead, five days ago, Pitchbook gave out their Private Equity Awards and named their “dealmaker of the year.”

Yes, it was Orlando Bravo.

We need to change the laws to hold these guys accountable.

As it currently stands, they borrow money, and then loot the companies, and then retreat behind the bulwark of the bankruptcy courts to avoid any responsibility for what they have done.

*According to “Knowledgeable Sources”, Russia, but no one is willing to go on the record, so YMMV.
Again, no one is willing to go on the record as to whether this was the FSB, or the GRU, or maybe it was the fault of those damn Eskimos.
The line is from Judgment at Nuremberg. It’s a great movie. Spencer Tracy, Marlene Dietrich, Burt Lancaster, Richard Widmark, Maximilian Schell, Judy Garland, Montgomery Clift, and a very young William Shatner. (Widmark says the line about the Eskimos.)

bbCode for Web Extensions (bbCodeWebEx) Version 0.3.0 Released

I have updated my Firefox addon, bbCode for Web Extensions (bbCodeWebEx), to revision 0.3.0.

Just to remind you, it adds a context menu to automate bbCode and HTML coding for blogging, discussion boards, and the like.

The new version was driven by the fact that the indescribably awful update to the Blogspot editor broke it.

In addition to that fix, I have added the following:

  • Color picker for fonts in HTML and XHTML.
  • Added a new-line token (~_~nl~_~) to allow users to make multi-line custom tags (a rough sketch of the idea is below).
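
For the curious, the token is just a placeholder for a line break when a custom tag template is expanded. The sketch below, in TypeScript purely for illustration, is a rough guess at the general idea and is not the addon’s actual code:

```typescript
// Hypothetical illustration: expand a user-defined custom tag template
// into multiple lines by replacing each ~_~nl~_~ token with a newline.
// (Not bbCodeWebEx's actual implementation, just the general idea.)
function expandCustomTag(template: string): string {
  return template.split("~_~nl~_~").join("\n");
}

// A two-line custom tag template:
const template = "[quote]~_~nl~_~[/quote]";
console.log(expandCustomTag(template));
// Prints:
// [quote]
// [/quote]
```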

At some point, I’ll look into porting it to Chrome, which uses a similar technology for its add-ons, because it appears that senior Mozilla management is hell-bent on killing off Firefox through stupid management tricks like abandoning their core expertise for the latest shiny object, destroying their internal knowledge base, and looting the organization through excessive pay and benefits for senior management.

Better off Ted is Reality, Zoom Edition


Algorithms are a Dystopian future

A professor teaching Zoom classes discovered that his head was being removed by the program.

He called tech support, who troubleshot the problem and (with the professor’s permission) related the account on Twitter.

It turned out that the Zoom algorithm was choosing a globe behind him as his head, and removing his actual head when it did that background trick thing.

One fact that will not surprise anyone who has seen THAT episode of Better Off Ted is that the professor with the problem was Black. (See clip below.)

It appears that much like Racist Republicans, Zoom cannot see color.

It cannot see it at all, and so the person is erased, which is one f%$# of a metaphor.

The optimists among us say that computer algorithms will eventually do away with racism.

The pessimists among us say that the computer algorithms will reinforce and extend racism.

The police, of course, are pulling out their wallets, because this gives them an excuse to discriminate, as is evidenced by the false arrest of a Black man based on racist facial recognition.

Color me cynical.

House Passes Bill to Regulate the Internet of Sh%$

Given that our f%$#ing light bulbs are being hijacked to DDoS Instagram influencers, legislation to regulate the so-called “Internet of Things” is long overdue:

Though it doesn’t grab the same headline attention as the silly and pointless TikTok ban, the lack of security and privacy standards in the internet of things (IOT) is arguably a much bigger problem. TikTok is, after all, just one app, hoovering up consumer data in a way that’s not particularly different from the 45,000 other international apps, services, governments, and telecoms doing much the same thing. The IOT, in contrast, involves millions of feebly secured products being attached to home and business networks every day. Many also made in China, but featuring microphones and cameras.

Thanks to a laundry list of lazy companies, everything from your Barbie doll to your tea kettle is now hackable. Worse, these devices are now being quickly incorporated into some of the largest botnets ever built, resulting in devastating and historic DDoS attacks. In short: thanks to “internet of things” companies that prioritized profits over consumer privacy and the safety of the internet, we’re now facing a security and privacy dumpster fire that many experts believe will, sooner or later, result in some notably nasty results.

To that end, the House this week finally passed the Internet of Things Cybersecurity Improvement Act, which should finally bring some meaningful privacy and security standards to the internet of things (IOT). Cory Gardner, Mark Warner, and other lawmakers note the bill creates some baseline standards for security and privacy that must be consistently updated (what a novel idea), while prohibiting government agencies from using gear that doesn’t pass muster. It also includes some transparency requirements mandating that any vulnerabilities in IOT hardware are disseminated among agencies and the public quickly:

I would suggest some additional provisions, like minimum-support-lifetime requirements and liability for the manufacturers and/or vendors.

Artificial Stupidity

Students have been given online short-essay exams, and the kids have discovered that they are graded by artificial intelligence, and that you can ace the test with two sentences and a word salad.

The problem here is not AI. The problem here is the tech bros trying to sell crap AI as gold:

On Monday, Dana Simmons came downstairs to find her 12-year-old son, Lazare, in tears. He’d completed the first assignment for his seventh-grade history class on Edgenuity, an online platform for virtual learning. He’d received a 50 out of 100. That wasn’t on a practice test — it was his real grade.

………

At first, Simmons tried to console her son. “I was like well, you know, some teachers grade really harshly at the beginning,” said Simmons, who is a history professor herself. Then, Lazare clarified that he’d received his grade less than a second after submitting his answers. A teacher couldn’t have read his response in that time, Simmons knew — her son was being graded by an algorithm.

Simmons watched Lazare complete more assignments. She looked at the correct answers, which Edgenuity revealed at the end. She surmised that Edgenuity’s AI was scanning for specific keywords that it expected to see in students’ answers. And she decided to game it.



Now, for every short-answer question, Lazare writes two long sentences followed by a disjointed list of keywords — anything that seems relevant to the question. “The questions are things like… ‘What was the advantage of Constantinople’s location for the power of the Byzantine empire,’” Simmons says. “So you go through, okay, what are the possible keywords that are associated with this? Wealth, caravan, ship, India, China, Middle East, he just threw all of those words in.”

………

Apparently, that “word salad” is enough to get a perfect grade on any short-answer question in an Edgenuity test.

Algorithm update. He cracked it: Two full sentences, followed by a word salad of all possibly applicable keywords. 100% on every assignment. Students on @EdgenuityInc, there's your ticket. He went from an F to an A+ without learning a thing.

— Dana Simmons (@DanaJSimmons) September 2, 2020
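
To see why this is so trivial to game, here is a toy sketch of what keyword-matching grading looks like. It is written in TypeScript purely for illustration, is my guess at the general shape of such a system rather than Edgenuity’s actual code, and uses the keyword list from the example above:

```typescript
// Toy sketch of keyword-based grading (a guess at the general shape of
// such a system, not Edgenuity's actual code).
function gradeShortAnswer(answer: string, keywords: string[]): number {
  const text = answer.toLowerCase();
  const hits = keywords.filter((kw) => text.includes(kw.toLowerCase()));
  return Math.round((hits.length / keywords.length) * 100);
}

// Keywords from the Constantinople example above:
const keywords = ["wealth", "caravan", "ship", "India", "China", "Middle East"];

// Two sentences plus a word salad hits every keyword and scores 100,
// whether or not the answer forms a coherent argument.
const answer =
  "Constantinople had a good location. Trade made the empire strong. " +
  "wealth caravan ship India China Middle East";
console.log(gradeShortAnswer(answer, keywords)); // 100
```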

This is typical of what we are getting from tech these days.

It seems that it’s all the late David Graeber’s “Bullsh%$ Jobs.”

Can You Say ……… Dystopian? Good — I Knew You Could

Google is looking at providing information services to employers to help them control their healthcare costs.

To put that into English, Google will collect enormous amounts of data about its clients’ employees in order to flag people who are engaging in “Unhealthy Lifestyles” and mitigate employer exposure to healthcare costs.

Basically, they will spy on employees, and provide information that employers can use to meddle in what their employees eat, when they sleep, etc.

And, though Google (Alphabet) will deny it, employers will use this data to fire employees who are flagged as healthcare cost risks.

If you want a picture of Google’s future, imagine a boot stamping on a human face—forever:*

Without much fanfare, Verily, Alphabet’s life sciences unit, has launched Coefficient Insurance. It was only a matter of time before Google’s parent got into the health insurance business — in fact, one wonders what took it so long. With Google’s intimate knowledge of our daily patterns, contacts and dreams, the search engine group has for years had a far better picture of risk than any insurer.

That Coefficient Insurance, which is also backed by Swiss Re, would initially focus on the relatively arcane area of stop-loss insurance to protect employers from staff health cost volatility should not obscure its ambitious agenda for the rest of the industry. Thus, according to Verily’s senior management, it might soon start monitoring at-risk employees via their smartphones and even coaching them towards healthier lifestyles.

………

As with many services out of Silicon Valley, there is not much reflection about the probable reconfigurations of power among social groups — the sick and the healthy, the insured and the uninsured, the employers and the employees — that are likely to occur once the digital dust settles.

One would need to be extremely naive to believe that a more extensive digital surveillance system — in the workplace and, with Alphabet running the show, now also at home, in the car and wherever your smartphone takes you — is likely to benefit the weak and the destitute. Some good might come out of it — a healthier workplace, maybe — but we should also inquire who would bear the cost of this digital utopia.

………

Privacy law does not offer an adequate solution either. Under pressure from employers, most workers acquiesce to being monitored. This was obvious even before Alphabet’s foray into insurance, as plenty of smaller players have been pitching employers sophisticated workplace surveillance systems as a way of lowering healthcare costs.

If this does not scare the hell out of you, you have not been paying attention.

*Apologies to George Orwell.

Acknowledging Reality

The CEO of Ford, Jim Hackett, is walking back expectations on self-driving cars, suggesting that they be limited to dedicated roadways.

That has been the opinion of pretty much every expert whose paycheck is not dependent on selling the still distant technology:

Ford CEO Jim Hackett scaled back hopes about the company’s plans for self-driving cars this week, admitting that the first vehicles will have limits. “We overestimated the arrival of autonomous vehicles,” said Hackett, who once headed the company’s autonomous vehicle division, at a Detroit Economic Club event on Tuesday. While Ford still plans on launching its self-driving car fleet in 2021, Hackett added that “its applications will be narrow, what we call geo-fenced, because the problem is so complex.”

Hackett’s announcement comes nearly six months after its CEO of autonomous vehicles, Sherif Markaby, detailed plans for the company’s self-driving car service in a Medium post. The company has invested over $4 billion in the technology’s development through 2023, including over $1 billion in Argo AI, an artificial intelligence company that is creating a virtual driver system. Ford is currently testing its self-driving vehicles in Miami, Washington, D.C. and Detroit.

Driving a car is literally the most difficult thing that people do on a routine basis, and it is made all the more difficult because it involves incredibly complex interactions with other human beings who do not truly understand the limits of their 1½+ ton death machines.

People who suggest that this is just around the corner are deluded or liars, or both.

Closing the Barn Door Before the Cow Leaves

Nevada is dropping the vote tabulation system that failed so ignominiously in Iowa.

All things considered, I’d go after Shadow, Inc. for a refund:

The Nevada Democratic Party said Tuesday that it will not use the app at the center of the technical difficulties causing delayed results in Iowa’s caucuses.

“NV Dems can confidently say that what happened in the Iowa caucus last night will not happen in Nevada on February 22nd,” the state party’s chairman, William McCurdy II, said in a statement. “We will not be employing the same app or vendor used in the Iowa caucus.”

The app was developed by Shadow, a software company in Denver. Representatives for the firm didn’t immediately respond to a request for comment.

Whoever at the DNC decided to push these guys on various state parties should be fired ……… Out of a cannon ……… and into the sun.

Same Sh%$ Different Name

One of the selling points of the F-35 Lightning II is its prognostics-based maintenance system.

Unfortunately, this has turned into a completely non-functional sh%$ show.

In response, Lockheed and the Pentagon have given the system a new name, and started back at square one on the software.

To quote the line commonly attributed to Albert Einstein, “The definition of insanity is doing the same thing over and over again, but expecting different results.”

The US military is dumping its Autonomous Logistics Information System (ALIS) in favour of ODIN as it tries to break with the complex past of its ailing F-35 fighter jet maintenance IT suite.

ALIS is the software suite that comes bundled with the F-35 fighter jet. A Lockheed Martin product, ALIS is intended to be a proactive maintenance suite: it tracks the health of each jet, tells supply systems when to order parts and tells maintainers what needs doing and when.

At least, that was the theory. Instead the all-encompassing suite has become so unwieldy and problem-ridden that the US armed forces are ditching it in favour of a new thing called ODIN, or Operational Data Integrated Network.

………

Far from meeting its originally envisioned role, ALIS was so bad that the US Government Accountability Office, an auditor similar to Britain’s National Audit Office, reckoned one US Air Force unit wasted 45,000 working hours per year working around ALIS’s shortcomings. In 2018, US Marine Corps station Beaufort was suffering spare part shortages of up to two years, thanks to ALIS making a hash of its spare part systems.

So, the same folks who made a complete dog’s breakfast out of maintaining the F-35 are going to start from square one, with the same people, and make it all better.

Seriously?

Mom, They Are Being Evil Again!!!

On the heels of deciding to cripple ad blockers in its Chrome browser now that it has achieved a monopoly, Google is rolling out a programming interface that will allow websites to see what apps are installed on your device, which, among other things, can deanonymize users and possibly reveal a host of other personal data.

But Google wants to accommodate its advertisers, so f%$# the rest of us:

A nascent web API called getInstalledRelatedApps offers a glimpse of why online privacy remains such an uncertain proposition.

In development since 2015, Google has been experimenting with the API since the release of Chrome 59 in 2017. As its name suggests, it is designed to let web apps and sites determine whether a corresponding native app is installed on a user’s device.

The purpose of the API, as described in the proposed specification, sounds laudable. More and more, the docs state, users will have web apps and native apps from the same source installed on the same device and as the apps’ feature sets converge and overlap, it will become important to be able to distinguish between the two, so users don’t receive two sets of notifications, for example.

But as spec editor and Google engineer Rayan Kanso observed in a discussion of the proposed browser plumbing, the initiative isn’t really about users so much as web and app publishers.

Late last month, after Kanso published notice of Google’s intent to officially support the API in a future version of Chrome, Daniel Bratell, a developer for the Opera browser, asked how this will help users.

“The mobile web already suffers from heavy handed attempts at getting web users to replace web sites with native apps and this mostly looks useful for funneling users from the open web to closed ecosystems,” Bratell said in a developer forum post.

Kanso made clear the primary focus of the proposal isn’t Chrome users.

“Although this isn’t an API that would directly benefit users, it indirectly benefits them through improved web experiences,” Kanso wrote. “We received very positive OT [origin trial] feedback from partners using this API, and the alternative is them using hacks to figure whether their native app is installed.”

………

That’s not say privacy concerns are ignored. On Wednesday, Google engineer Yoav Weiss joined the discussion to express concern about the API’s privacy implications.

“Knowing that specific apps were installed can contain valuable and potentially sensitive information about the user: income level, relationship status, sexual orientation, etc,” Weiss wrote, adding, “The collection of bits of answers to ‘Is app X installed?’ can be a powerful fingerprinting vector.”

………

And in a separate discussion Henri Sivonen, a Mozilla engineer, worried that the API might lead to more attempts to steer users away from the web and toward a native app, something websites like Reddit already try to do.
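
For those wondering what the plumbing looks like, here is a minimal sketch of how a page might call the API, based on my reading of the public proposal (the method is exposed only in Chrome, and the field names here should be treated as an assumption rather than gospel):

```typescript
// Minimal sketch of calling getInstalledRelatedApps from a page.
// The method is Chrome-only and not in the standard TypeScript DOM
// typings, so we describe the shape we expect from the proposal.
interface RelatedApplication {
  platform: string; // e.g. "play", "windows", "webapp"
  url?: string;
  id?: string;
  version?: string;
}

type NavigatorWithApps = Navigator & {
  getInstalledRelatedApps?: () => Promise<RelatedApplication[]>;
};

async function checkForNativeApp(): Promise<void> {
  const nav = navigator as NavigatorWithApps;
  if (!nav.getInstalledRelatedApps) {
    console.log("getInstalledRelatedApps is not supported here.");
    return;
  }
  // Only apps that declare a relationship with this origin are returned,
  // but every yes/no answer is still one more bit of fingerprinting surface.
  const apps = await nav.getInstalledRelatedApps();
  console.log(
    apps.length > 0
      ? "A related native app is installed."
      : "No related native app found."
  );
}

checkForNativeApp();
```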

Google users are not the customer; they are the product to be monetized.

Break them up.

Abolish the Patent Court

Since its founding in 1982, the U.S. Court of Appeals for the Federal Circuit, aka “The Patent Court” has been a morass of sloppy patent maximalist jurisprudence.

This is why the Supreme Court has been routinely overturning their rulings over the past few years.

Whenever the Supreme Court agrees to review a case from the patent court, there is some sort of reversal in the works, and likely a significant amount of shade thrown back at the court.

Now SCOTUS will review one of the worst opinions of the Patent Court, Google v. Oracle, where it was determined that programming interfaces (APIs) were subject to copyright, which has the effect of making program interoperability unlawful:

Some big news out of the Supreme Court this morning, as it has agreed to hear the appeal in the never-ending Oracle v. Google lawsuit regarding whether or not copyright applies to APIs (the case is now captioned as Google v. Oracle, since it was Google asking the Supreme Court to hear the appeal). We’ve been covering the case and all its permutations for many years now, and it’s notable that the Supreme Court is going to consider both of the questions that Google petitioned over. Specifically:

  1. Whether copyright protection extends to a software interface. 
  2. Whether, as the jury found, petitioner’s use of a software interface in the context of creating a new computer program constitutes fair use. 

………

To me, as I always point out in this case, the key element will be getting the Supreme Court to recognize that an API is not software. Oracle and its supporters keep trying to insist that an API and executable code are one and the same, and I worry that the Supreme Court will not fully understand the differences, though I am sure that there will be compelling amici briefs trying to explain this point to them. 

It’s clear that SCOTUS is looking to slap down the patent court again.

If you are not sure what an API is, it is a set of specifications that describes how programs talk with each other.

For example, if you wanted the square root of a number, one library might have you write sqrt(#) while another uses [#]squareroot.

They both mean exactly the same thing, but only the one that matches what a given program expects will work with it.
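
To make the distinction concrete, here is a tiny sketch, in TypeScript purely for illustration (the actual case involved Java’s class libraries): the interface declaration is the API, and two completely different bodies of code can implement it.

```typescript
// Illustration only: the "API" is the declaration callers write against;
// the implementation is the code that does the work. Two libraries can
// expose the same interface with completely different code behind it.
interface MathLib {
  sqrt(x: number): number; // this signature is the API
}

// One vendor's implementation:
const libA: MathLib = {
  sqrt: (x) => Math.sqrt(x),
};

// A clean-room reimplementation of the same API (Newton's method):
const libB: MathLib = {
  sqrt: (x) => {
    let guess = x / 2 || 1;
    for (let i = 0; i < 20; i++) guess = (guess + x / guess) / 2;
    return guess;
  },
};

// Code written against the API works with either implementation.
function hypotenuse(lib: MathLib, a: number, b: number): number {
  return lib.sqrt(a * a + b * b);
}

console.log(hypotenuse(libA, 3, 4)); // 5
console.log(hypotenuse(libB, 3, 4)); // ~5
```

Letting someone copyright the declaration line would let the owner of libA forbid libB from existing, even though libB shares none of libA’s code. That is what is at stake for interoperability.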

Essentially, Oracle is claiming copyright over program compatibility, and the patent court swallowed it hook, line, and sinker.

Mistake Jet Update

Full rate production for the F-35 Lightning II has been delayed.

What can I say? This program has only been around for more than a quarter of a century, and that is just not enough time:

The F-35 Joint Strike Fighter full-rate production decision, which is slated for December, may be put off for up to 13 months because of delays with integrating the Joint Simulation Environment (JSE).

Pentagon chief weapons buyer Ellen Lord signed a program deviation report this week that documented the expected threshold breach in the milestone C full-rate production decision, she told reporters Oct. 18 during a Pentagon briefing.

“What this is a result of, and I follow this very carefully, is the fact that we are not making as quick progress with the Joint Simulation Environment integration of the F-35 into it,” Lord said. Integrating the JSE with the F-35 is “critical” for initial operational test and evaluation, she said. The JSE projects characteristics like weather, geography and range that allows test pilots to use the jet’s full capabilities against the full range of required threats and scenarios.

This simulator is not a pilot simulator. It’s a software development environment used to validate that the software actually works.

It doesn’t work.

Mark Madoff Zuckerberg

Given Facebook’s culture, and Zuckerberg’s complete lack of ethics, I am inclined to believe that this is an accurate characterization of how the social media giant hits its numbers:

Facebook now has a market capitalization approaching $600 billion, making it nominally one of the most valuable companies on earth. It’s a true business miracle: a company that was out of users in 2012 managed to find a wellspring of nearly infinite and sustained growth that has lasted it, so far, half of the way through 2019.

So what is that magical ingredient, that secret sauce, that “genius” trade secret, that turned an over-funded money-losing startup into one of America’s greatest business success stories? It’s one that Bernie Madoff would recognize instantly: fraud, in the form of fake accounts.

Old money goes out, and new money comes in to replace it. That’s how a traditional Ponzi scheme works. Madoff kept his going for decades, managing to attain the rank of Chairman of the NASDAQ while he was at it.

Zuckerberg’s version is slightly different, but only slightly: old users leave after getting bored, disgusted and distrustful, and new users come in to replace them. Except that as Mark’s friend and lieutenant, Sam Lessin told us, the “new users” part of the equation was already getting to be a problem in 2012. On October 26, Lessin, wrote, “we are running out of humans (and have run-out of valuable humans from an advertiser perspective).” At the time, it was far from clear that Facebook even had a viable business model, and according to Frontline, Sheryl Sandberg was panicking due to the company’s poor revenue numbers.

To balance it out and keep “growth” on the rise, all Facebook had to do was turn a blind eye. And did it ever.

In Singer v. Facebook, Inc.—a lawsuit filed in the Northern District of California alleging that Facebook has been telling advertisers that it can “reach” more people than actually exist in basically every major metropolitan area—the plaintiffs quote former Facebook employees, understandably identified only as Confidential Witnesses, as stating that Facebook’s “Potential Reach” statistic was a “made-up PR number” and “fluff.” Also, that “those who were responsible for ensuring the accuracy ‘did not give a shit.’” Another individual, “a former Operations Contractor with Facebook, stated that Facebook was not concerned with stopping duplicate or fake accounts.”

………

Yet signs that Mark’s fake account problem is no different than Madoff’s fake account statement problem are everywhere. Google Trends shows worldwide “Facebook” queries down 80% from their November 2012 peak. (Instagram doesn’t even come close to making up for the loss.) Mobile metrics measuring use of the Facebook mobile app are down.

And the company’s own disclosures about fake accounts stand out mostly for their internal inconsistency—one set of numbers, measured in percentages, is disclosed to the SEC, while another, with absolute figures, appears on its “transparency portal.” While they reveal a problem escalating at an alarming rate and are constantly being revised upward—Facebook claims that false accounts are at 5% and duplicate accounts at 11%, up from 1% and 6% respectively in Q2 2017—they don’t measure quite the same things, and are impossible to reconcile. At the end of 2017, Facebook decided to stop releasing those percentages on a quarterly basis, opting for an annual basis instead. Out of sight, out of mind.

………

What Facebook does say is this: its measurements, the ones subject to “significant judgment,” are taken from an undisclosed “limited sample of accounts.” How limited? That doesn’t matter, because “[w]e believe fake accounts are measured correctly within the limitations to our measurement systems” and “reporting fake accounts…may be a bad way to look at things.”

And how many fake accounts did Facebook report being created in Q2 2019? Only 2.2 billion, with a “B,” which is approximately the same as the number of active users Facebook would like us to believe that it has.

A comprehensive look back at Facebook’s disclosures suggests that of the company’s 12 billion total accounts ever created, about 10 billion are fake. And as many as 1 billion are probably active, if not more. (Facebook says that this estimate is “not based on any facts,” but much like the false statistics it provided to advertisers on video viewership, that too is a lie.)

So, fake accounts may be a bad way to look at things, as Facebook suggests—or they may be the key to the largest corporate fraud in history.

Advertisers pay Facebook on the assumption that the people viewing and clicking their ads are real. But that’s often not the case. Facebook has absolutely no incentive to solve the problem, it’s already in court over it, and its former employees are talking. From Mark’s vantage point, it’s raining free money. All he has to do to get advertisers to spend is convince the world that Facebook is huge and it’s only getting huger.

………

But I’m not wrong. Facebook is a real product, but like Enron, it’s also a scam, now the largest corporate scandal ever. It won’t release its data about the 2016 election, about fake accounts, or about anything material—and because Mark knows it’s a scam, he won’t agree to testify before the British parliament in a way that could require him to actually answer any substantive questions, as I did in June. And because Facebook is also a component of the S&P 500, countless people have an incentive to maintain the status quo.

So should we break up the tech companies and Facebook in particular? It’s already a campaign issue for the next presidential election. Elizabeth Warren says yes. Beto O’Rourke wants “stronger regulations.” Kamala Harris would rather talk about privacy. Everyone else—even Donald Trump—generally agrees that something needs to be done. Yet the unspoken issue at the center of it all remains: Mark is running a Ponzi scheme, but Wall Street, Congress, the Federal Trade Commission, the think tanks, and their associates haven’t figured it out.

………

The biggest problem with treating Facebook as a monopoly, as opposed to the byproduct of what Jesse Eisenger calls “The Chickensh%$ Club,” is that it wrongly affirms Mark’s infallibility and fails to see through him and his scheme, let alone the reality that he’s not even in control anymore because no one is.

Would it have helped to separate Madoff Securities LLC into one company per floor, or split up Enron by division? Probably not, but talking about it is Facebook’s dream come true. Because the question we should really be discussing is “How many years should Mark Zuckerberg and Sheryl Sandberg ultimately serve in prison?”

If this is true, and I do find the argument compelling, I would pay money to watch him being frog-marched out of his Menlo Park headquarters in handcuffs.

Airbus and Boeing Juxtaposed

It turns out that, much like its 737 MAX counterpart, the Airbus A321neo also has a pitch-up issue.

There is an important difference, though: Airbus actually spent the money to make sure that its implementation was well tested and thoroughly redundant, while Boeing did it on the cheap:

The European Aviation Safety Agency EASA has issued an Air Worthiness Directive (AD) to instruct operators of the Airbus A321neo of a Pitch instability issue.

EASA writes “excessive pitch attitude can occur in certain conditions and during specific manoeuvres. This condition, if not corrected, could result in reduced control of the aeroplane.

We analyze how this is similar or different to the Boeing 737 MAX pitch instability issues.

………

As the AD does not apply to the in-service A320neo, the issue must not be connected to the pitch instability which comes as a natural consequence of mounting the larger neo engines on the A320 series. It can be restricted to how the A321neo version of the ELACs handles the aircraft’s controls in an excessive pitch up condition.

Since publishing the article Airbus has provided us the following information:

The issue is an A321neo landing configuration at extreme aft CG conditions and below 100ft only issue, discovered by Airbus and reported to EASA. Violent maneuvers in for instance a go-around in these conditions can cause a pitch up which the pilots can counteract using their side-sticks. No FBW nose downs or similar is commanded, it’s just the FBW doesn’t neutralize the pitch-up (like FBW using the Airbus style flight laws are supposed to do), the pilots have to do it. Airbus has assisted EASA in issuing the AD which restricts the aft CG used in operational landings until the ELAC software is updated.

Our comment: The Airbus information explains why the issue is limited to the A321neo. The A321 has a different flap configuration than the A320/A319, giving a more nose-down approach angle (a lift curve with a transposed AoA vs. lift range). It seems this difference can set the condition for the pitch-up which the FBW at this point does not compensate for. The Pilots have to do it. The FBW will take away this pitch-up in a FBW software release available 3Q2020.

………

Like the 737 MAX, the A319/320/321neos are affected by the mounting of larger engines with their larger nacelles ahead of the center of gravity, Figure 1.


Figure 1. A321 shown on top of A321neo. The larger engine nacelles are marked with a violet color. Source: Airbus and Leeham Co.

………

There are several takeaways from the above:

  • As we have written in the MAX articles, pitch instabilities in certain parts of an airliner’s wide flight envelope are common.
  • It comes down to how these are addressed to produce a safe aircraft. In the case of the MAX and A320, software-based control logic is used, controlling the movements of the horizontal stabilizer and elevator.
  • The key is how these controls are designed, tested and implemented.
  • The original MAX implementation was unacceptably badly done. It relied on a single sensor, commanded unnecessary repeated nose-down trim commands and didn’t have any global limitation on its authority.
  • The Airbus version for the A321neo has a solid implementation based on adequate hardware/software redundancy and relevant limitations on its authority. But it can be improved (see our Airbus update on cause and fix).

Similar problems, but in one case nobody is being killed, because Airbus isn’t letting finance and marketing drive basic engineering decisions.

This is a Feature, Not a Bug


[Image: Elon Musk thinks that he is this.]

[Image: He is actually this.]

[Image: And the polling.]

It turns out that of all the names given to driver assistance technology, Autopilot is the one most likely to cause over-reliance on the tech.

This is not a surprise. Overselling this feature, and the capabilities of his cars generally, has been central to the business plan for Tesla Motors.

Does the name “Autopilot” cause people to overestimate the abilities of Tesla’s driver-assistance technology? It’s a question that comes up in the Ars comments almost every time we write about the feature.

Critics warn that some customers will assume something called “Autopilot” is fully self-driving. Tesla’s defenders counter by pointing out that autopilot capabilities in planes aren’t fully autonomous. Pilots still have to monitor their operation and intervene if they have a problem, and Tesla’s Autopilot system is no different.

A new survey from the Insurance Institute for Highway Safety brings some valuable hard data to this debate. The group asked drivers questions about the capabilities of five advanced driver-assistance systems (ADAS). They identified the products only by their brand name—”Autopilot,” “Traffic Jam Assist,” “Super Cruise,” etc. Survey participants were not told which carmaker made each product, and they did not learn the capabilities of the products. There were 2,000 total respondents, but each was asked about only two out of five systems, leading to a few hundred responses for each product.

………

For example, 48 percent of drivers said that it was safe for a driver to take their hands off the wheel when Autopilot is active, compared with around 33 percent for ProPilot Assist and less than 30 percent for the other systems named. Six percent of drivers said it was safe to take a nap in a car with Autopilot, while only three percent said the same for other ADAS systems.

Tesla further compounds this issue by promising that fully autonomous driving will be available within the next few months.

This, and Theranos, is what happens when the Silicon Valley ethos of “Fake it Until You Make It” is applied to the real world.

Why I am Short on Google

Because they have literally the worst technical support on planet Earth.

21 years in, the only way to get effective support from Google is to have friends who work there.  Going in without personal connections is like trying to get a liquor license from the Ottoman Empire. https://t.co/T1KvCR14cY

— Pinboard (@Pinboard) May 3, 2019

Without knowing someone, or sending a letter from a lawyer, it is effectively impossible to get help if you have a problem with their services.

If you are going to rely on a company for mission-critical applications, that is a problem: you are not only unable to reach someone competent, you are completely unable to reach anyone, period.

Beyond self-dealing, extracting monopoly rents, and behavior that would be called stalking if it were not coming from Silicon Valley, they have nothing to offer.

Look Out Below

Boeing, signalling what might be an extended grounding, has announced that it is curtailing production of the 737 MAX:

Boeing’s decision Friday to reduce the production rate on the 737 MAX was a surprise in timing and scope.

This came so quickly and was steep, cutting production from 52 MAXes per month to 42. It comes on the heals [sic] that a second software problem was found, delaying submission of the MCAS software upgrade to the FAA for review and approval.

The production rate cut is effective in mid-April. This is lightning speed in this industry, where rate breaks, as changes are called, typically have 12-18 month lead times.

Boeing hasn’t announced what the second software problem is. LNA is told it is the interface between the MCAS upgrade and the Flight Control System, but specifics are lacking.

LNA interprets these combined events as indicative that the MAX will be grounded well past the Paris Air Show in June.

The impact to Boeing is going to be huge: customer compensation, deferred revenue, lost revenue, potentially canceled orders and potential lost orders in sales campaigns. The hit to the Boeing brand and impacts of multiple investigations won’t become clear for months to come.

Also, we are seeing airlines scrambling to lease aircraft to replace their grounded MAX airliners.

Boeing is in a world of hurt.

How can you tell if Mark Zuckerberg is lying?

That’s easy.

You can tell that he is lying if his lips are moving.

As such, I am highly dubious of any promise that he makes, particularly if he is promising enhanced privacy:

If you click enough times through the website of Saudi Aramco, the largest oil producer in the world, you’ll reach a quiet section called “Addressing the climate challenge.” In this part of the website, the fossil fuel monolith claims, “Our contributions to the climate challenge are tangible expressions of our ethos, supported by company policies, of conducting our business in a way that addresses the climate challenge.” This is meaningless, of course — as is the announcement Mark Zuckerberg made today about his newfound “privacy-focused vision for social networking.” Don’t be fooled by either.

………

And so here we are: “As I think about the future of the internet, I believe a privacy-focused communications platform will become even more important than today’s open platforms,” Zuckerberg writes in his road-to-Damascus revelation about personal privacy. The roughly 3,000-word manifesto reads as though Facebook is fundamentally realigning itself as a privacy champion — a company that will no longer track what you read, buy, see, watch, and hear in order to sell companies the opportunity to intervene in your future acts. But, it turns out, the new “privacy-focused” Facebook involves only one change: the enabling of end-to-end encryption across the company’s instant messaging services. Such a tech shift would prevent anyone, even Facebook, outside of chat participants from reading your messages.

That’s it.

Although the move is laudable — and will be a boon for dissident Facebook chatters in countries where government surveillance is a real, perpetual risk — promising to someday soon forfeit your ability to eavesdrop on over 2 billion people doesn’t exactly make you eligible for sainthood in 2019. It doesn’t help that Zuckerberg’s post is completely absent of details beyond a plan to implement these encryption changes “over the next few years” — which is particularly silly considering Facebook has yet to implement privacy features promised in the wake of its previous mega-scandals.

Not only has Zuckerberg issued similar mea culpas over the years, he has done so on something resembling an annual basis.

These promises have never correlated to meaningful changes in behavior.