Tag: Artificial Intelligence

Quote of the Day

After an audit of the algorithm, the resume screening company found that it considered two factors to be most indicative of job performance: whether the applicant’s name was Jared, and whether they played high school lacrosse.

Dave Gershgorn on Quartz about how algorithms reinforce bias.

I would argue that this is a feature, and not a bug.

When you look at the “Gig Economy” and AI “Expert Systems,” the unspoken selling point is that they are “money laundering for bias.”

Artificial Stupidity

Students are being given online short-essay exams, and the kids have discovered that the answers are graded by artificial intelligence, and that you can ace the test with two sentences and a word salad.

The problem here is not AI. The problem here is the tech bros trying to sell crap AI as gold:

On Monday, Dana Simmons came downstairs to find her 12-year-old son, Lazare, in tears. He’d completed the first assignment for his seventh-grade history class on Edgenuity, an online platform for virtual learning. He’d received a 50 out of 100. That wasn’t on a practice test — it was his real grade.

………

At first, Simmons tried to console her son. “I was like well, you know, some teachers grade really harshly at the beginning,” said Simmons, who is a history professor herself. Then, Lazare clarified that he’d received his grade less than a second after submitting his answers. A teacher couldn’t have read his response in that time, Simmons knew — her son was being graded by an algorithm.

Simmons watched Lazare complete more assignments. She looked at the correct answers, which Edgenuity revealed at the end. She surmised that Edgenuity’s AI was scanning for specific keywords that it expected to see in students’ answers. And she decided to game it.

Now, for every short-answer question, Lazare writes two long sentences followed by a disjointed list of keywords — anything that seems relevant to the question. “The questions are things like… ‘What was the advantage of Constantinople’s location for the power of the Byzantine empire,’” Simmons says. “So you go through, okay, what are the possible keywords that are associated with this? Wealth, caravan, ship, India, China, Middle East, he just threw all of those words in.”

………

Apparently, that “word salad” is enough to get a perfect grade on any short-answer question in an Edgenuity test.

Algorithm update. He cracked it: Two full sentences, followed by a word salad of all possibly applicable keywords. 100% on every assignment. Students on @EdgenuityInc, there's your ticket. He went from an F to an A+ without learning a thing.

— Dana Simmons (@DanaJSimmons) September 2, 2020
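
The mechanics are easy to imagine. Below is a minimal sketch of the kind of keyword-matching grader the Verge story describes; the keyword list and scoring rule are my own guesses for illustration, not Edgenuity’s actual code, but they show why a word salad beats a carefully written paragraph:

```python
# Toy keyword-matching grader. The keywords and the scoring rule are invented
# for illustration; this is not Edgenuity's algorithm.
EXPECTED_KEYWORDS = {"constantinople", "trade", "wealth", "caravan",
                     "ship", "india", "china", "middle east", "strait"}

def grade(answer: str) -> float:
    """Score an answer by the fraction of expected keywords it contains."""
    text = answer.lower()
    hits = sum(1 for kw in EXPECTED_KEYWORDS if kw in text)
    return 100 * hits / len(EXPECTED_KEYWORDS)

thoughtful = ("Constantinople sat on the strait between Europe and Asia, "
              "so the empire could tax trade moving by ship.")
word_salad = ("Constantinople controlled trade. Merchants grew wealthy. "
              "wealth caravan ship India China Middle East strait")

print(grade(thoughtful))  # partial credit, because it never names some keywords
print(grade(word_salad))  # near-perfect, because every keyword appears somewhere
```

Any grader that only counts keyword hits, without checking whether they are used coherently, is gameable in exactly the way Lazare found.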

This is typical of what we are getting from tech these days.

It seems that it’s all straight out of the late David Graeber’s “Bullsh%$ Jobs.”

Another Bounty on US Troops? Yawn.

Have you heard the story of the boy who cried wolf?

Well now, the US State Security Apparatus is alleging that Iran paid bounties for attacks on US troops.

Coming next: unnamed sources offering evidence that LeBron James is paying bounties for attacks on US military personnel in Afghanistan:

Iran is reported to have paid bounties to a Taliban faction for killing US and coalition troops in Afghanistan, leading to six attacks last year including a suicide bombing at the US airbase in Bagram.

According to CNN, US intelligence assessed that Iran paid the bounties to the Haqqani network, for the Bagram attack on 11 December, which killed two civilians and injured more than 70 others, including two Americans.

The Pentagon decided not to take retaliatory action in the hope of preserving a peace deal the Trump administration agreed with the Taliban in February, the CNN report said. In January, less than a month after the Bagram attack, the US killed the Iranian Revolutionary Guard general Qassem Suleimani, in a drone strike in Baghdad, but that attack is not thought to have been a direct retaliation for Bagram.

………

The report comes nearly two months after allegations that Russia was paying bounties to Taliban fighters for killing Americans in Afghanistan. Donald Trump rejected those reports as a “hoax”, but the secretary of state, Mike Pompeo, confirmed he warned his Russian counterpart, Sergei Lavrov, that there would be “an enormous price to pay” if Moscow was paying such bounties. The Pentagon has said it will investigate the reports of Russian bounties but has so far not produced a conclusion to that investigation.

The credence that the national security press gives their sources in intelligence, who are literally professional liars, boggles the mind.

News You Can Use

Researchers at the University of Chicago have a project named Fawkes, which poisons images so that facial-recognition AI cannot be trained on them when they are scraped from public websites, while the images remain nearly unchanged to human eyes.

I’m thinking that Imgur should offer this as a filter:

Researchers at the University of Chicago’s Sand Lab have developed a technique for tweaking photos of people so that they sabotage facial-recognition systems.

The project, named Fawkes in reference to the mask in the V for Vendetta graphic novel and film depicting 16th century failed assassin Guy Fawkes, is described in a paper scheduled for presentation in August at the USENIX Security Symposium 2020.

Fawkes consists of software that runs an algorithm designed to “cloak” photos so they mistrain facial recognition systems, rendering them ineffective at identifying the depicted person. These “cloaks,” which AI researchers refer to as perturbations, are claimed to be robust enough to survive subsequent blurring and image compression.

The paper [PDF], titled, “Fawkes: Protecting Privacy against Unauthorized Deep Learning Models,” is co-authored by Shawn Shan, Emily Wenger, Jiayun Zhang, Huiying Li, Haitao Zheng, and Ben Zhao, all with the University of Chicago.

………

The boffins claim their pixel scrambling scheme provides greater than 95 per cent protection, regardless of whether facial recognition systems get trained via transfer learning or from scratch. They also say it provides about 80 per cent protection when clean, “uncloaked” images leak and get added to the training mix alongside altered snapshots.

They claim 100 per cent success at avoiding facial recognition matches using Microsoft’s Azure Face API, Amazon Rekognition, and Face++. Their tests involve cloaking a set of face photos and providing them as training data, then running uncloaked test images of the same person against the mistrained model.

………

The researchers have posted their Python code on GitHub, with instructions for users of Linux, macOS, and Windows. Interested individuals may wish to try cloaking publicly posted pictures of themselves so that if the snaps get scraped and used to train a facial recognition system – as Clearview AI is said to have done – the pictures won’t be useful for identifying the people they depict.

If someone comes up with a simple tool, it should be used on every social media post.
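
For the technically curious, the general idea behind this kind of cloaking is an adversarial perturbation: nudge the pixels just enough that a feature extractor embeds the photo near someone else’s face, while a human sees essentially the same picture. The sketch below is my own toy illustration of that idea in PyTorch, using a stand-in embedding network; it is not the Fawkes code that the researchers have posted on GitHub:

```python
# Illustrative sketch of feature-space "cloaking" (not the Fawkes codebase).
import torch
import torch.nn as nn

# Stand-in feature extractor; a real scraper would train on a face-embedding model.
embedder = nn.Sequential(
    nn.Conv2d(3, 16, 3, stride=2), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2), nn.ReLU(),
    nn.Flatten(), nn.LazyLinear(128),
)

def cloak(image, decoy, steps=100, lr=0.01, budget=0.03):
    """Perturb `image` so its embedding moves toward the `decoy` identity's
    embedding, while keeping every pixel change within `budget`."""
    delta = torch.zeros_like(image, requires_grad=True)
    target = embedder(decoy).detach()
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        loss = torch.norm(embedder(image + delta) - target)
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-budget, budget)   # keep the perturbation imperceptible
    return (image + delta).clamp(0, 1).detach()

me = torch.rand(1, 3, 112, 112)             # stand-in for your photo
someone_else = torch.rand(1, 3, 112, 112)   # stand-in for a decoy identity
cloaked = cloak(me, someone_else)
print(torch.abs(cloaked - me).max())        # the visible change stays within the budget
```

The trick is that training pipelines learn from the feature space, not from what humans see, so a model trained on cloaked photos learns the wrong face.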

AI Scams

In this case, it is food delivery robots known as Kiwibots, which, in addition to frequently blocking curb cuts in ways that threaten the lives of the disabled, lie about their use of artificial intelligence to navigate.

In reality, the robots rely on remote operators in Colombia who are paid only $2.00/hour:

It seemed inevitable that, with the era of the autonomous car, ideas like the Kiwibots would emerge: small, ostensibly autonomous vehicles in charge of food delivery, posing an alternative to courier services such as Glovo, Deliveroo or Uber Eats, where deliveries are carried out by human couriers on bikes.

Everything seemed fantastic until it was discovered that these vehicles have little autonomy about them: an investigation has found that the robots are actually remotely controlled by operators in Colombia who are paid $2 per hour for this work.

………

This startup, called Kiwi Campus, launched small robots that looked like little carts with four wheels and a storage compartment at the top for orders. The robots became a sensation around that university, where the autonomous vehicles began operating.

………

The people in charge of the Kiwibots have several videos on their website that show how these messenger robots work: theoretically, the magic is provided by a complex computer-vision system that is able to recognize obstacles and detect whether the robot can cross the street or not.

………

What was not shown, as the San Francisco Chronicle reported, is that the robots are remotely controlled by human operators who use their GPS sensors and cameras to send them commands every 5 or 10 seconds.

Kiwi Campus has acknowledged that there is indeed a degree of human remote control, but describes its service as a “parallel autonomy” system. The robots also travel at a very reduced speed of 1.6 to 2.4 km/h, which means Kiwi workers have to pick up food orders from restaurants, carry them to the Kiwibots’ departure points, and load the food into the robots’ storage compartments before the deliveries are made.

The model is unique, but it has more secrets than it might seem and much less autonomy than the robots – each of which costs $2,500 – initially appeared to have. The idea benefits from the low cost of the workforce that controls them: the operators in Colombia are paid $2 per hour, far less than it would cost to install, for example, LIDAR systems, which would be difficult to integrate into these robots.
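
What “parallel autonomy” seems to boil down to is a supervisory control loop: the robot streams GPS and camera frames, a remote operator sends back a waypoint every few seconds, and the robot handles only the crawl in between. Here is a rough sketch of that division of labor; every name and number in it is my own invention, not Kiwi’s actual protocol:

```python
# Toy sketch of "parallel autonomy": a remote operator picks the next waypoint
# every few seconds and the robot only handles the crawl between waypoints.
# Names, numbers, and the protocol here are invented; this is not Kiwi's software.
import time

def operator_chooses_waypoint(gps, camera_frame):
    """Stand-in for the human operator clicking the next point to drive to."""
    lat, lon = gps
    return (lat + 0.00001, lon)          # "keep going up the block"

def drive_toward(waypoint, speed_kmh=2.0):
    """Stand-in for the robot's only real autonomy: creep toward the waypoint."""
    print(f"crawling at {speed_kmh} km/h toward {waypoint}")

gps = (37.8715, -122.2730)               # somewhere near a university campus
for _ in range(3):                       # three supervisory cycles
    frame = b"jpeg bytes from the onboard camera"
    waypoint = operator_chooses_waypoint(gps, frame)   # the human decision
    drive_toward(waypoint)                             # the robot execution
    time.sleep(5)                                      # a command every ~5 seconds
    gps = waypoint
```

The human is making every meaningful decision; the “AI” is the part that rolls forward two metres at a time.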

Seriously, why do we let fraudsters extract private profits from public space based on their lies?

We Now Know Where Microsoft® Bob® Works

Microsoft’s MSN network is attempting to replace human editors with artificial “Intelligence”.

Much fail ensues:

Microsoft’s decision to replace human journalists with robots has backfired, after the tech company’s artificial intelligence software illustrated a news story about racism with a photo of the wrong mixed-race member of the band Little Mix.

A week after the Guardian revealed plans to fire the human editors who run MSN.com and replace them with Microsoft’s artificial intelligence code, an early rollout of the software resulted in a story about the singer Jade Thirlwall’s personal reflections on racism being illustrated with a picture of her fellow band member Leigh-Anne Pinnock.

………

Microsoft does not carry out original reporting but employs human editors to select, edit and repurpose articles from news outlets, including the Guardian. Articles are then hosted on Microsoft’s website and the tech company shares advertising revenue with the original publishers. At the end of last month, Microsoft decided to fire hundreds of journalists in the middle of a pandemic and fully replace them with the artificial intelligence software.

………

In advance of the publication of this article, staff at MSN were told to expect a negative article in the Guardian about alleged racist bias in the artificial intelligence software that will soon take their jobs.

Because they are unable to stop the new robot editor selecting stories from external news sites such as the Guardian, the remaining human staff have been told to stay alert and delete a version of this article if the robot decides it is of interest and automatically publishes it on MSN.com. They have also been warned that even if they delete it, the robot editor may overrule them and attempt to publish it again.

Staff have already had to delete coverage criticising MSN for running the story about Little Mix with the wrong image after the AI software decided stories about the incident would interest MSN readers.

Epic fail.

A Feature, Not a Bug

It turns out that an algorithm used by health care providers to determine who is in need of enhanced care and monitoring discriminates against black people.

Call me a conspiracy theorist, but I continue to think that algorithmic discrimination is actually one of the goals of this sort of AI tech, just like Airbnb listings, Facebook employment ads, etc.:

A health care algorithm makes black patients substantially less likely than their white counterparts to receive important medical treatment. The major flaw affects millions of patients, and was just revealed in research published this week in the journal Science.

The study does not name the makers of the algorithm, but Ziad Obermeyer, an acting associate professor at the University of California, Berkeley, who worked on the study says “almost every large health care system” is using it, as well as institutions like insurers. Similar algorithms are produced by several different companies as well. “This is a systematic feature of the way pretty much everyone in the space approaches this problem,” he says.

The algorithm is used by health care providers to screen patients for “high-risk care management” intervention. Under this system, patients who have especially complex medical needs are automatically flagged by the algorithm. Once selected, they may receive additional care resources, like more attention from doctors. As the researchers note, the system is widely used around the United States, and for good reason. Extra benefits like dedicated nurses and more primary care appointments are costly for health care providers. The algorithm is used to predict which patients will benefit the most from extra assistance, allowing providers to focus their limited time and resources where they are most needed.

To make that prediction, the algorithm relies on data about how much it costs a care provider to treat a patient. In theory, this could act as a substitute for how sick a patient is. But by studying a dataset of patients, the authors of the Science study show that, because of unequal access to health care, black patients have much less spent on them for treatments than similarly sick white patients. The algorithm doesn’t account for this discrepancy, leading to a startlingly large racial bias against treatment for the black patients.

The effect was drastic. Currently, 17.7 percent of black patients receive the additional attention, the researchers found. If the disparity was remedied, that number would skyrocket to 46.5 percent of patients.
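
The mechanism is worth spelling out: the model is trained to predict cost, and cost is used as a stand-in for need, so a group that has less spent on it at the same level of sickness gets ranked as lower-need. Here is a toy numerical sketch of that effect; every figure in it is invented for illustration:

```python
# Toy numbers illustrating the proxy problem from the Science study: rank
# patients by cost when one group has less spent on it at the same sickness.
import random
random.seed(0)

def patient(group):
    sickness = random.uniform(0, 10)                           # true medical need
    dollars_per_unit_need = 1000 if group == "white" else 700  # unequal access to care
    return {"group": group, "sickness": sickness,
            "cost": sickness * dollars_per_unit_need}

patients = [patient("white") for _ in range(500)] + \
           [patient("black") for _ in range(500)]

# Flag the top 20% by cost for "high-risk care management".
cutoff = sorted(p["cost"] for p in patients)[int(0.8 * len(patients))]
flagged = [p for p in patients if p["cost"] >= cutoff]

for g in ("white", "black"):
    share = sum(p["group"] == g for p in flagged) / len(flagged)
    print(f"{g}: {share:.0%} of the flagged slots")
# Equally sick patients in the lower-spending group are passed over, because
# the model never sees need, only dollars.
```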

I really do believe that this is a deliberate business decision. “It’s not racism, it’s just giving the customers what they want.”

NIMBY, Silicon Valley Edition

Silicon Valley techies really don’t want self-driving cars on their streets.

This is very different from the XKCD cartoon, where software experts disavow computerized voting for everyone, because most of the article is about the people wanting the testing to go on somewhere else:

Karen Brenchley [full disclosure, we dated in the mid 1980s] is a computer scientist with expertise in training artificial intelligence, but this longtime Silicon Valley resident has pangs of anxiety whenever she sees Waymo self-driving cars maneuver the streets near her home.

The former product manager, who has worked for Microsoft and Hewlett-Packard, wonders how engineers could teach the robocars operating on her tree-lined streets to make snap decisions, speed and slow with the flow of traffic and yield to pedestrians coming from the nearby park. She has asked her husband, an award-winning science-fiction author who doesn’t drive, to wear a shiny vest while cycling to ensure autonomous vehicles spot him in a rush of activity.

The problem isn’t that she doesn’t understand the technology. It’s that she does, and she knows how flawed nascent technology can be.

“I’m not skeptical long-term,” said Brenchley, who has lived in Silicon Valley for 30 years. “I don’t want to be the guinea pig. I don’t want my husband to be the guinea pig.”

Well, who should be the guinea pig, then?

If these cars are not ready to share the roads with your bike-riding spouse or children, they are not ready to share the roads with ANYONE’S bike-riding spouse or children.

I expect to see commercial fusion power before we see truly autonomous cars outside of very limited roadways.

BTW, Elon Musk’s vision for a video-only self-driving scheme is even more hare-brained, as this Twitter thread demonstrates. (after the break)

Musk doesn’t care though, because he is one major facial scar away from being a Bond villain:

The Computer is Your Friend

It’s an article about the problems with self-checkout at the grocery store, which is at least three orders of magnitude easier a nut to crack than a self-driving car (everything has a bar code, the shopper can re-swipe, etc.), but it still does not work.

Much like self-driving cars, it probably does not deliver the benefits promised, and its proponents proposed redefining the environment to accommodate their “update”:

Automation is often presented as an inexorably advancing force, whether it’s ushering in a threat to jobs or a promise of increased leisure or larger profits. We’re made to imagine the robots rising, increasingly mechanized systems of production, more streamlined modes of everyday living. But the truth is that automation technology and automated systems very often fail. And even when they do, they nonetheless frequently wind up stranded in our lives.

For every automated appliance or system that actually makes performing a task easier—dishwashers, ATMs, robotic factory arms, say—there seems to be another one—self-checkout kiosks, automated phone menus, mass email marketing—that actively makes our lives worse.

I’ve taken to calling this second category, simply, sh%$ty automation.

Sh%$ty automation usually, but not always, comes about when new user-facing technology is adopted by a company or institution for the ostensible reason of minimizing labor and cutting costs. Nobody likes wading through an interminable phone menu to try to address a suspect charge on a phone bill—literally, everyone would rather speak with a customer service rep. But that’s the system we’re stuck with because a corporation decided that the inconvenience to the user is well worth the savings in labor costs.

That’s just one example. But it gets at what makes spending some time wading through the world of sh%$ty automation worthwhile—it often doesn’t even matter if automation improves anything at all for the customer, for the user, for anyone. If some enterprise solutions pitchman or government contractor can sell the top brass on the idea that a half-baked bit of automation will save it some money, the cashier, clerk, call center employee might be replaced by ill-functioning machinery, or see their hours cut to make space for it, the users will be made to suffer through garbage interfaces that waste hours of their day or make them want to hellscream into the receiver—and no one wins. Not even, sometimes, the company or organization seeking the savings, which can suffer reputational damage.

………

To start, let’s look at everyone’s favorite cluster of machinery to walk past in the grocery store with a dismissive scowl, to hold off approaching until you’ve finally, painfully decided the line you’ve been stuck in is so painfully not-moving it’s worth the hassle: Self-checkout kiosks.

There are few better poster children for sh%$ty automation than self-checkout. I have literally never, as in not one single time, successfully completed a checkout at a self-service station in a grocery store without having to call a human employee over. And it’s not because I’m an idiot. Or not entirely, anyway. Incessant, erroneous repetitions of “please place your item in the bag” and “unknown item in the bagging area” are among the most-loathed phrases in the 21st century lexicon for a reason, and that reason is that self-checkout is categorically awful.

Hence, I turned to Alexandra Mateescu, an ethnographer and researcher at Data & Society, and a co-author, with Madeleine Clare Elish, of “AI in Context: The Labor of Integrating New Technologies,” which uses self-checkout as a case study, to find out why.

To understand how we arrived at our current self-checkout limbo, and why it’s terrible and dysfunctional in the special way that it is, it helps to understand that the technology we encounter in the grocery store is just the most recent iteration in a century-long drive to offload more of the work involved in the shopping process onto us, the shoppers.

It sounds an awful lot like the self-driving car.

Quote of the Day

My Infant Daughter’s Life Shouldn’t Be a Variable In Tesla Autopilot’s Public Beta

Jonathon Klein on The Drive

The author states the obvious: that Elon Musk and Tesla have been lying about their self-driving capabilities.

He also includes his own experience, when he was almost hit by a Tesla on Autopilot.

Using customers as beta subjects is very much a part of the Silicon Valley culture, but this is not something that might screw up your playlist; it is operating a 2-ton death machine.

Enough.

In the Annals of Stupidity

The idea that we should reconfigure cities to accommodate the limitations of robot cars is stupid.

It’s History Schmistory stupid:

Special report Behind the mostly fake “battle” about driverless cars (conventional versus autonomous is the one that captures all the headlines), there are several much more important scraps. One is over the future of the city: will a city be built around machines or people? How much will pedestrians have to sacrifice for the driverless car to succeed?

………

But the driverless car has to deal with pedestrians, as Christian Wolmar discussed at The Register last week: “The open spaces that cities like to encourage would end as the barricades go up. And foot movement would need to be enforced with Singapore-style authoritarianism.”

………

“The randomness of the environment such as children or wildlife cannot be dealt with by today’s technology,” admits Volvo’s director of autonomous driving, Markus Rothoff. The driverless car can’t hear you scream. Tests are not being conducted in real pedestrian-congested conditions.

The cheat is: just get rid of the people around cars, so you don’t need to solve these problems.

When people talk about self-driving cars arriving in the foreseeable future, this is what they mean.

If this sounds far-fetched, I will remind you that there is a recent historical precedent: in the early 20th century, automotive interests restructured the city and criminalized what had been ordinary walking. They called it “Jaywalking”.

Do not underestimate the willingness of people who profit from driverless cars to restrict the rest of us, and to place the costs on society as a whole.

They have done it before.

Live in Obedient Fear, Citizen

Amazon is routinely listening to your Alexa without your knowledge:

Tens of millions of people use smart speakers and their voice software to play games, find music or trawl for trivia. Millions more are reluctant to invite the devices and their powerful microphones into their homes out of concern that someone might be listening.

Sometimes, someone is.

Amazon.com Inc. employs thousands of people around the world to help improve the Alexa digital assistant powering its line of Echo speakers. The team listens to voice recordings captured in Echo owners’ homes and offices. The recordings are transcribed, annotated and then fed back into the software as part of an effort to eliminate gaps in Alexa’s understanding of human speech and help it better respond to commands.

The Alexa voice review process, described by seven people who have worked on the program, highlights the often-overlooked human role in training software algorithms. In marketing materials Amazon says Alexa “lives in the cloud and is always getting smarter.” But like many software tools built to learn from experience, humans are doing some of the teaching.

The team comprises a mix of contractors and full-time Amazon employees who work in outposts from Boston to Costa Rica, India and Romania, according to the people, who signed nondisclosure agreements barring them from speaking publicly about the program. They work nine hours a day, with each reviewer parsing as many as 1,000 audio clips per shift, according to two workers based at Amazon’s Bucharest office, which takes up the top three floors of the Globalworth building in the Romanian capital’s up-and-coming Pipera district. The modern facility stands out amid the crumbling infrastructure and bears no exterior sign advertising Amazon’s presence.

Well, that’s reassuring, isn’t it: Romanian hackers and Indian robocallers listening in on your home.

The work is mostly mundane. One worker in Boston said he mined accumulated voice data for specific utterances such as “Taylor Swift” and annotated them to indicate the searcher meant the musical artist. Occasionally the listeners pick up things Echo owners likely would rather stay private: a woman singing badly off key in the shower, say, or a child screaming for help. The teams use internal chat rooms to share files when they need help parsing a muddled word—or come across an amusing recording.

And then, you become a running gag at the next Christmas party.

If they want people in a petri dish so that they can tweak their algorithms, all they need to do is get their informed consent, pay them, and tell them when it is on or off. But that is inconvenient and expensive, so once again Eric Arthur Blair is spinning in his grave.

Clearly, Self Driving Cars are Just Around the Corner

It appears that the latest breakthrough for self-driving cars is a proposal to outlaw pedestrians:

You’re crossing the street wrong.

That is essentially the argument some self-driving car boosters have fallen back on in the months after the first pedestrian death attributed to an autonomous vehicle and amid growing concerns that artificial intelligence capable of real-world driving is further away than many predicted just a few years ago.

In a line reminiscent of Steve Jobs’s famous defense of the iPhone 4’s flawed antennae—“Don’t hold it like that”—these technologists say the problem isn’t that self-driving cars don’t work, it’s that people act unpredictably.

“What we tell people is, ‘Please be lawful and please be considerate,’” says Andrew Ng, a well-known machine learning researcher who runs a venture fund that invests in AI-enabled companies, including self-driving startup Drive.AI. In other words: no jaywalking.

Whether self-driving cars can correctly identify and avoid pedestrians crossing streets has become a burning issue since March after an Uber self-driving car killed a woman in Arizona who was walking a bicycle across the street at night outside a designated crosswalk. The incident is still under investigation, but a preliminary report from federal safety regulators said the car’s sensors had detected the woman but its decision-making software discounted the sensor data, concluding it was likely a false positive.

………

With these timelines slipping, driverless proponents like Ng say there’s one surefire shortcut to getting self-driving cars on the streets sooner: persuade pedestrians to behave less erratically. If they use crosswalks, where there are contextual clues—pavement markings and stop lights—the software is more likely to identify them.

But to others the very fact that Ng is suggesting such a thing is a sign that today’s technology simply can’t deliver self-driving cars as originally envisioned. “The AI we would really need hasn’t yet arrived,” says Gary Marcus, a New York University professor of psychology who researches both human and artificial intelligence. He says Ng is “just redefining the goalposts to make the job easier,” and that if the only way we can achieve safe self-driving cars is to completely segregate them from human drivers and pedestrians, we already had such technology: trains.

Rodney Brooks, a well-known robotics researcher and an emeritus professor at the Massachusetts Institute of Technology, wrote in a blog post critical of Ng’s sentiments that “the great promise of self-driving cars has been that they will eliminate traffic deaths. Now [Ng] is saying that they will eliminate traffic deaths as long as all humans are trained to change their behavior? What just happened?”

We can now add hypocrisy to the other shortcomings of self-driving car advocates.

Well, this is Profoundly NOT Reassuring

It appears that the robot Uber that ran down and killed a pedestrian saw the woman, but ignored her, because it had been programmed to.

Basically, Uber’s self-driving software is so crappy and has so many false positives that it was programmed to ignore actual human beings.

Uber is still Uber:

Uber has concluded the likely reason why one of its self-driving cars fatally struck a pedestrian earlier this year, according to tech outlet The Information. The car’s software recognized the victim, Elaine Herzberg, standing in the middle of the road, but decided it didn’t need to react right away, the outlet reported, citing two unnamed people briefed on the matter.

The reason, according to the publication, was how the car’s software was “tuned.” 

Here’s more from The Information:

Like other autonomous vehicle systems, Uber’s software has the ability to ignore “false positives,” or objects in its path that wouldn’t actually be a problem for the vehicle, such as a plastic bag floating over a road. In this case, Uber executives believe the company’s system was tuned so that it reacted less to such objects. But the tuning went too far, and the car didn’t react fast enough, one of these people said.

Let me translate this into English:  Uber put a 4000 pound death machine on the road with software that was incapable of determining the difference between a plastic bag and a human being.

This is not just reprehensible, it might very well be criminal.
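
To be concrete about what “tuned” means here: it is a confidence threshold. Raise the bar an object must clear before the car reacts and you stop braking for plastic bags, but you also wait longer before believing a detection is a person. Here is a toy sketch of that trade-off, with invented numbers that are in no way Uber’s actual software:

```python
# Toy sketch of what "tuning" a detector against false positives means.
# The confidences and the reaction rule are invented; this is not Uber's code.

def should_react(confidence_object_matters, threshold):
    """Brake or swerve only if the detector is at least `threshold` sure."""
    return confidence_object_matters >= threshold

detections = [
    ("plastic bag drifting across the lane",  0.30),
    ("shadow under an overpass",              0.20),
    ("pedestrian walking a bicycle at night", 0.55),  # a hard case for the sensors
]

for threshold in (0.25, 0.60):
    reacted = [name for name, conf in detections if should_react(conf, threshold)]
    print(f"threshold {threshold}: reacts to {reacted}")
# At 0.25 the car brakes for the bag as well as the pedestrian; "tune" the
# threshold up to 0.60 to stop the phantom braking, and it ignores all three,
# pedestrian included.
```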

OK, I May Have Been Too Dismissive of AI

It appears that scientists have managed to create a machine that can assemble IKEA furniture in about 20 minutes.

Of course, it took a few years to do the programming:

Singaporean scientists have asked the question: “Can robots assemble an IKEA chair?” and come back with enough of a “Yes” that The Register feels it time to call for robots to take this job away from humans. Pleeeease, robots. Take this job away from us!

The boffins behind this breakthrough, assistant professor Pham Quang Cuong and a team of students, all of Nanyang Technological University, were cognizant of previous attempts at unpacking flat-pack kit that had used bespoke kit. So they instead used off-the-shelf robots and open-source code like the Point cloud library and gave them the job of assembling a “STEFAN” chair.

………

The robots didn’t do all the work themselves – assistant professor Pham and his students laid out the parts for the bots to find. But once unleashed, the machines did the job in 20 minutes and 19 seconds with over half of that time spent on computing the required actions. Actual build time was nine minutes, a little less than the average human according to IKEA.

Success came only at the fourth attempt, a failure rate that would put IKEA out of business. Problems on early attempts included the bots breaking some parts.

………

The team’s paper is here.  (paid subscription required)

No IKEA?

I’m feeling much better about the rise of the machines.

Not a Surprise

When I worked on Future Combat Systems in the early 2000s, one of the things it was supposed to do was to save fuel because it used hybrid propulsion.

Because it was carrying a large number of batteries, it was also supposed to be able to spend a significant amount of time running on battery power in “silent watch mode”, where it would be hard to detect, because it would be operating without running its engine while its sensors took in information about its immediate vicinity and relayed it across the network.

It turned out that a “significant amount of time” was something less than an hour, because of the power consumption of the sensors, computers, and communications systems.

It turns out something very similar is happening with self-driving cars:

For longtime residents of Pittsburgh, seeing self-driving cars built by Uber, Argo AI, and others roam their streets is nothing new. The city’s history with robot cars goes back to the late 1980s, when students at Carnegie Mellon University caught the occasional glimpse of a strange vehicle lumbering across campus. The bright-blue Chevy panel van, chugging along at slower than a walking pace, may not have looked like much. But NavLab 1 was slowly—very slowly—pioneering the age of autonomous driving.

Why did the researchers at CMU’s Robotics Institute use the van instead of, say, a Prius? First, this was a decade before Toyota started making the hybrid. Second, the NavLab (that’s Navigational Laboratory) was one of the first autonomous vehicles to carry its computers with it. They needed space, and lots of it. For the four researchers monitoring computer workstations, with their bulky cathode ray monitors stretched across a workbench. For the on-board supercomputer, camera, giant laser scanner, and air-conditioner. And for the four-cylinder gasoline engine that did nothing but generate electricity to keep the kit running.

Thirty years on, the companies carrying that early research into reality have proven that cars can indeed drive themselves, and now they’re swiveling to sort out the practical bits. Those include regulations, liability, security, business models, and turning prototypes into production vehicles, by miniaturizing the electronics and reducing that massive electricity draw.

Today’s self-drivers don’t need extra engines, but they still use terrific amounts of power to run their onboard sensors and do all the calculations needed to analyze the world and make driving decisions. And it’s becoming a problem.

A production car you can buy today, with just cameras and radar, generates something like 6 gigabytes of data every 30 seconds. It’s even more for a self-driver, with additional sensors like lidar. All the data needs to be combined, sorted, and turned into a robot-friendly picture of the world, with instructions on how to move through it. That takes huge computing power, which means huge electricity demands. Prototypes use around 2,500 watts, enough to light 40 incandescent light bulbs.

“To put such a system into a combustion-engined car doesn’t make any sense, because the fuel consumption will go up tremendously,” says Wilko Stark, Mercedes-Benz’s vice president of strategy. Switch over to electric cars, and that draw translates to reduced range, because power from the battery goes to the computers instead of the motors.
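
The article gives enough numbers for a back-of-the-envelope check on that range hit. The 2,500 W and 6 GB per 30 seconds figures come from the quote above; the battery size, speed, and drivetrain consumption below are my own assumed round numbers, not anything from Wired:

```python
# Back-of-the-envelope check on the range cost of a ~2,500 W self-driving stack.
# The 2.5 kW and 6 GB / 30 s figures are from the article; battery size, speed,
# and drivetrain consumption are assumed round numbers for illustration.

compute_draw_kw       = 2.5    # sensors plus computers (prototype figure)
battery_kwh           = 75.0   # assumed EV battery pack
drivetrain_kwh_per_km = 0.18   # assumed energy just to move the car
speed_kmh             = 50.0   # assumed average speed in mixed driving

range_without = battery_kwh / drivetrain_kwh_per_km
compute_kwh_per_km = compute_draw_kw / speed_kmh   # extra energy burned per km
range_with = battery_kwh / (drivetrain_kwh_per_km + compute_kwh_per_km)

print(f"range without the stack: {range_without:.0f} km")
print(f"range with the stack:    {range_with:.0f} km "
      f"({100 * (1 - range_with / range_without):.0f}% less)")

# Data volume: 6 GB every 30 s is 0.2 GB/s, roughly 17 TB per 24 hours of driving.
print(f"{6 / 30 * 3600 * 24 / 1000:.1f} TB per day of sensor data")
```

With those assumptions the computers alone shave off something like a fifth of the range, which is why the Mercedes strategy man is unenthusiastic.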

Don’t be depressed. Self-driving cars are only 10 years away, and will be just 10 years away for the next few decades, just like fusion power and the Iranian nuclear arsenal.

Scam Jujitsu, Online Style

Someone has developed a chatbot that screws around with scammers.

If you have some free time, you might want to avail yourself of this:

Chatbots. They’re usually a waste of your time, so why not have them waste someone else’s instead? Better yet: why not have them waste an email scammer’s time.

That’s the premise behind Re:scam, an email chatbot operated by New Zealand cybersecurity firm Netsafe. Next time you get a dodgy email in your inbox, says Netsafe, forward it on to me@rescam.org, and a proxy email address will start replying to the scammer for you, doing its very utmost to waste their time. You can see a few sample dialogues in the video above, or check out a longer back-and-forth below.

Works for me.

But What if it Gets Used for Evil ……… Oh ……… Wait ……… It Already Has

Computer boffins in the land of Hobbits are using AI-based chatbots to screw with scammers.

It’s nice to see someone turning chatbots against the scammers:

Thousands of online scammers around the globe are being fooled by artificial intelligence bots posing as New Zealanders and created by the country’s internet watchdog to protect it from “phishing” scams.

Chatbots that use distinct New Zealand slang such as “aye” have been deployed by Netsafe in a bid to engage scammers in protracted email exchanges that waste their time, gather intelligence and lure them away from actual victims.

Cyber crime costs New Zealanders around NZ$250m annually. Computer programmers at Netsafe spent more than a year designing the bots as part of their Re:scam initiative, which went live on Wednesday.

Within 24 hours 6,000 scam emails had been sent to the Re:scam email address and there were 1000 active conversations taking place between scammers and chatbots.

So far, the longest exchange between a scammer and a chatbot pretending to be a New Zealander was 20 emails long.

The bots use humour, grammatical errors and local slang to make their “personas” believable, said Netsafe CEO Martin Cocker. As the programme engages in more fake conversations with scammers overseas, its vocabulary, intelligence and personality traits will grow.
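
The mechanics are pleasantly simple: a proxy address receives the forwarded scam and keeps replying with plausible, slightly error-ridden messages for as long as the scammer keeps answering. The toy sketch below is my own illustration of that loop, not Netsafe’s code:

```python
# Toy scam-baiting auto-replier in the spirit of Re:scam (not Netsafe's code).
# It cycles through plausible, time-wasting replies; a real system would send
# them over SMTP from a proxy address on the scammer's thread.
import itertools, random

PERSONA_REPLIES = [
    "Sounds great aye, but my bank needs a reference number, can you send it?",
    "Sorry, I typed my card number wrong last time. Can you resend the form?",
    "My nephew says this might be a scam but he's always winding me up. Continue?",
    "I tried to send the fee but the internet at the bach is slow, what now?",
]

def next_reply(counter=itertools.count()):
    """Pick the next reply, occasionally adding a typo to keep the persona believable."""
    reply = PERSONA_REPLIES[next(counter) % len(PERSONA_REPLIES)]
    if random.random() < 0.3:
        reply = reply.replace("the", "teh", 1)   # deliberate grammatical wobble
    return reply

# Simulate a short exchange: every scammer email gets another time-wasting answer.
for scam_email in ["You have won 1,000,000 USD", "Send the processing fee", "Why delay??"]:
    print("scammer:", scam_email)
    print("bot:    ", next_reply())
```

Every minute the scammer spends arguing with a canned persona is a minute not spent on an actual victim, which is the whole point.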

Here’s hoping that the AIs will spend their time battling each other, and leave the rest of us alone.