Tag: Computer

The Gig Economy Strikes Back

A group of Uber and Lyft drivers serving Washington National Airport has taken to turning off their apps simultaneously to trigger surge pricing and drive up their fares:

Drivers for ride-hailing apps Lyft and Uber have organized for better pay through collective action – and not by unionizing.

Here’s how it works: a group of drivers who pick up passengers at Ronald Reagan Washington National Airport, outside the US capital, have been turning off their taxi apps simultaneously to influence the surge pricing algorithms used by the two companies.

A report published last week by local ABC affiliate WJLA-TV recounts how a group of 100-150 drivers all turned off their driver apps in sync – coordinated by an individual using an unidentified app – to create the false impression of a local driver shortage.

With the ride supply down as demand peaks, the taxi apps’ surge pricing algorithms kick in, offering higher rates to entice more drivers to come to the airport. Minutes later, once the price rises anywhere from $10 to $19 or so, the drivers sign back on and accept the fare at a level they find more reasonable.

This is why you should not do business with companies that treat their employees like crap.

Even ignoring the ethical issues, it is likely that those poorly treated employees will find a way to fight back, and you are likely to be the battlefield.
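
To see just how much leverage the drivers have, here is a back-of-the-envelope sketch of the mechanic. This is a toy model, not either company's actual pricing code, and every number in it is invented:

```python
# A toy surge model -- NOT Uber's or Lyft's actual algorithm. All numbers are
# invented purely to illustrate why a coordinated logoff moves the price.

def surge_multiplier(waiting_riders: int, online_drivers: int, cap: float = 1.5) -> float:
    """Price scales with the rider/driver ratio, floored at 1x and capped."""
    if online_drivers == 0:
        return cap
    ratio = waiting_riders / online_drivers
    return min(cap, max(1.0, ratio))

base_fare = 35.0            # hypothetical airport-run fare, in dollars
riders = 120                # riders waiting at the terminal

for drivers in (150, 20):   # before and after ~130 drivers sign off in sync
    m = surge_multiplier(riders, drivers)
    print(f"{drivers:3d} drivers online -> {m:.2f}x surge, fare ~ ${base_fare * m:.2f}")
```

With plenty of drivers online the fare sits at the base rate; knock most of them offline for a few minutes and the multiplier jumps, adding a bump in the same rough range the article describes.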

Mark Zuckerberg is a Lying Sack of Excrement, Part VMCMLXIX

This is not a surprise.

All evidence indicates that Facebook did not care how its data was used until it became a public relations debacle, because it still got its money:

Facebook knew about Cambridge Analytica’s dodgy data-gathering practices at least four months before they were exposed in news reports, according to internal FB emails.

Crucially, the staff memos contradict public assurances made by Facebook CEO Mark Zuckerberg as well as sworn testimony offered by the company.

Those emails remain under a court seal, at Facebook’s request, although the Attorney General of Washington DC, Karl Racine, is seeking to have them revealed to all as part of his legal battle against the antisocial media giant.

Racine’s motion to unseal [PDF] the files this month stated “an email exchange between Facebook employees discussing how Cambridge Analytica (and others) violated Facebook’s policies” includes sufficient detail to raise the question of whether Facebook has – yet again – given misleading or outright false statements.

The redacted request reads: “The jurisdictional facts in the document shows that as early as September 2015, a DC-based Facebook employee warned the company that Cambridge Analytica was a “[REDACTED]” asked other Facebook employees to “[REDACTED]” and received responses that Cambridge Analytica’s data-scraping practices were “[REDACTED]” with Facebook’s platform policy.”

It goes on: “The Document also indicates that months later in December 2015, on the same day an article was published by The Guardian on Cambridge Analytica, a Facebook employee reported that she had ‘[REDACTED].'”

The reason this is critical is because Facebook has always claimed it learned of Cambridge Analytica’s misuse of people’s profile information – data obtained via a third-party quiz app built by Aleksandr Kogan – from press reports. Zuckerberg said in a statement more than two years later: “In 2015, we learned from journalists at The Guardian that Kogan had shared data from his app with Cambridge Analytica. It is against our policies for developers to share data without people’s consent, so we immediately banned Kogan’s app from our platform.”

Zuck omitted, incidentally, that Facebook threatened to sue the newspaper if it published its story. Facebook also admitted today that its executives have claimed the same thing as their boss under oath – that the social network only learned about the data misuse from press reports.

………

The truth is that Facebook is a train wreck, with executives encouraged to do whatever it takes to secure Facebook’s position in the digital economy and bring in revenue, regardless of laws, ethics, morals, or anything else.

Its work culture is fundamentally broken, with top executives making it plain that the company will obfuscate, mislead, block, and bully before it even considers telling the truth – and that culture attracts more of the same.

Even by the notoriously lax ethical standards of the tech industry, Facebook is a particularly bad actor.

Credit Where Credit is Due

  _   _
((___))
[ x x ]
 \   /
 (' ')
  (U)

   cDc

Beto O’Rourke is cool as f%$#.

It’s not the skateboarding, it’s the fact that he was a part of the grey hat hacker collective Cult of the Dead Cow, and he brought in the only female hacker member of the group.

So basically, he’s a part of internet history.

He’s seriously chill.

Still, he has studiously avoided talking about his positions on almost everything.

If we nominate another blank slate who is devoted to the status quo, the next Republican President will make Donald Trump look like Ike Eisenhower.

Officer, Would You Like a Cup of Shut the F%$# Up to Go Along with Your Doughnut?

Waze is a navigation app, and it’s better than Google Maps, in part because its users can report road conditions and the like.

One of the road hazards that users can report is speed traps, and cops HATE that, because that’s how they make their money.

Well, now the NYPD has issued a cease and desist letter to Waze, who promptly told them to go Cheney themselves:

The popular traffic app Waze gathers user-submitted feedback to alert drivers to possible inconveniences they might experience on the road—inconveniences like getting stuck at a DWI traffic point. Now, the NYPD reportedly has a message for Waze and its parent company Google: Snitches get stitches.

CBS New York obtained a cease and desist letter that it claims was sent by the NYPD to Google, in which the law enforcement agency insists the Waze app is creating a dangerous situation by alerting users of nearby checkpoints. According to the report, the letter states:

Individuals who post the locations of DWI checkpoints may be engaging in criminal conduct since such actions could be intentional attempts to prevent and/or impair the administration of the DWI laws and other relevant criminal and traffic laws.

The posting of such information for public consumption is irresponsible since it only serves to aid impaired and intoxicated drivers to evade checkpoints and encourage reckless driving. Revealing the location of checkpoints puts those drivers, their passengers, and the general public at risk.

Curiously, a link to the full letter on the CBS website is now broken. An NYPD spokesperson told Gizmodo in an email, “I can confirm the NYPD sent the letter.” When asked for comment, a Google representative told us, “Safety is a top priority when developing navigation features at Google. We believe that informing drivers about upcoming speed traps allows them to be more careful and make safer decisions when they’re on the road.”

There are a number of things that are wrong with what the police have done.

First, Waze does not report sobriety checkpoints; it reports speed traps, traffic cams, and the presence of police cars.

Second, this is a seriously chicken sh%$ move, but seriously chicken sh%$ moves are part of the law enforcement mentality, which is why one occasionally hears porcine metaphors applied to the local constabulary.

Modern Extortion, YouTube Style

Scammers are targeting YouTube channels with bogus copyright “strikes” to extort money:

In a terrible abuse of YouTube’s copyright system, a YouTuber is reporting that scammers are using the platform’s “three strike” system for extortion. After filing two false claims against ObbyRaidz, the scammers contacted him demanding cash to avoid a third – and the termination of his channel. Every week, millions of YouTubers upload content for pleasure and indeed profit, hoping to reach a wide audience with their topics of choice.

On occasion, these users run into trouble by using content to which they don’t own the copyrights, such as a music track or similar.

While these complaints can often be dealt with quickly and relatively amicably using YouTube’s Content ID system, allegedly-infringing users can also get a so-called ‘strike’ against their account. Get three of these and a carefully maintained channel, with countless hours of work behind it, can be rendered dead by YouTube.

As reported on many occasions, this system is open to all kinds of abuse but a situation highlighted by a YouTuber called ‘ObbyRaidz’ takes things to a horrible new level.

The YouTuber, who concentrates on Minecraft-related videos, reports that he’s received two bogus strikes on his account. While this is nothing new, it appears the strikes were deliberately malicious, with a longer-term plan to extort money from him.

………

While people should be protected from this kind of abuse, both from a copyright perspective and the crime of extortion, ObbyRaidz says he’s had zero luck in getting assistance from YouTube.

“It’s very unfortunate and YouTube has not done very much for me. I can’t get in contact with them. One of the appeals got denied,” he explains.

It’s the nature of Google that no matter what happens, you never ever get to contact a human being, so if they take you down, you are basically completely f%$#ed.

Tech support literally does not exist, and this is a core policy of Google, which means that any victim of a scam like this is essentially on his own.

As is noted at Naked Capitalism, “If your business depends on a platform, you don’t have a business.”

Why You Don’t Deal with Companies That Mistreat Their Employees

Because these employees have no incentive to deal honestly with either their employer or the customer.

Case in point (again) is Amazon, where employees were leaking internal data to dishonest vendors on its site:

Amazon.com Inc. is fighting a barrage of seller scams on its website, including firing several employees suspected of having helped supply independent merchants with inside information, according to people familiar with the company’s effort.

Amazon was investigating suspected data leaks and bribes of its employees, The Wall Street Journal reported in September. Since then, the company has dismissed several workers in the U.S. and India for allegedly inappropriately accessing internal data that was being misused by disreputable merchants, these people said.

Amazon in recent weeks also has deleted thousands of suspect reviews, restricted sellers’ access to customer data on its website and stifled some techniques that trick the site into surfacing products higher in search results, according to the people.

An Amazon spokeswoman said the company is aggressively pursuing those who are trying to harm sellers on its website, using tools including machine learning to block bad behavior before it happens.

Yep, AI will solve this, which is why there are no trolls and scammers on Twitter and Facebook.

The crackdown, however, hasn’t stopped some sellers from sabotaging rivals. A recent rash of merchants claim competitors are maliciously flagging products as being counterfeit or infringing trademarks, prompting Amazon to temporarily boot legitimate products from the site while it evaluates them.

Sellers also are buying Amazon wholesaler accounts on the black market to gain access to volumes of product listings, people familiar with the practice said. These accounts on Amazon’s Vendor Central system are designed to enable wholesalers to edit product listings to ensure they are marketed accurately. But some sellers misuse these accounts to alter rivals’ product pages, such as by changing photos to unrelated items, these people said.

………

Some sellers engage in a practice dubbed “brushing,” in which fake accounts use real addresses to place orders so they can leave positive reviews, according to people familiar with the matter. Amazon’s security team was sent scrambling late last year, when a customer wrote Chief Executive Jeff Bezos to complain of such a scam after a vibrator he didn’t order was sent to his address, one of the people said.

………

Amazon is focusing part of the internal bribery investigation on India, a major alleged source of data misuse by Amazon employees, according to a person familiar with the effort.

Some Amazon employees in India and China who work with sellers in customer-support roles have said their ability to search an internal database for data such as specific product performance or trending keywords has been strictly limited, according to people familiar with the matter. Some in India also are no longer able to use their USB ports to download such data, some of the people said.

The issue is not China or India; it is that Amazon’s employees want to get their money and get the f%$# out of Dodge.

If they saw the possibility of making a decent career and a decent life at Amazon, they would treat losing their jobs there as a real risk. They don’t, and so the technological terror that Jeff Bezos has constructed will continue to be a highly problematic place.

As to actually fixing the problems by being a better employer, where’s the money in that?

Your Mouth to God’s Ear

There has been a ruling involving a small French advertising firm which could completely reshape online advertising.

Basically, the regulator ruled that, under the European Union’s General Data Protection Regulation (GDPR), consent to collect personal information cannot be passed on to third parties through a contract.

If this stands, it will completely reshape internet advertising, IMNSHO for the better:

A ruling in late October against a little-known French adtech firm that popped up on the national data watchdog’s website earlier this month is causing ripples of excitement to run through privacy watchers in Europe who believe it signals the beginning of the end for creepy online ads.

The excitement is palpable.

Impressively so, given the dry CNIL decision against mobile “demand side platform” Vectaury was only published in the regulator’s native dense French legalese.

Here is the bombshell though: Consent through the @IABEurope framework is inherently invalid. Not because of a technical detail. Not because of an implementation aspect that could be fixed. No.
You cannot pass consent to another controller through a contractual relationship. BOOM pic.twitter.com/xMlNHJTKwl

— Robin Berjon (@robinberjon) November 16, 2018

………

In plainer English, this is being interpreted by data experts as the regulator stating that consent to processing personal data cannot be gained through a framework arrangement which bundles a number of uses behind a single “I agree” button that, when clicked, passes consent to partners via a contractual relationship.

………

The firm was harvesting a bunch of personal data (including people’s location and device IDs) on its partners’ mobile users via an SDK embedded in their apps, and receiving bids for these users’ eyeballs via another standard piece of the programmatic advertising pipe — ad exchanges and supply side platforms — which also get passed personal data so they can broadcast it widely via the online ad world’s real-time bidding (RTB) system. That’s to solicit potential advertisers’ bids for the attention of the individual app user… The wider the personal data gets spread, the more potential ad bids.

That scale is how programmatic works. It also looks horrible from a GDPR “privacy by design and default” standpoint.
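
For those who want the distinction in something more concrete than French legalese, here is a minimal sketch of bundled versus per-controller consent. It is my own illustration, not the IAB framework’s actual data structures:

```python
# Illustration only: NOT the IAB framework's real data model, just a sketch of
# the distinction the CNIL decision is being read to draw.

from dataclasses import dataclass, field


@dataclass
class ConsentRecord:
    user_id: str
    controller: str                 # the company the user actually consented to
    purposes: set = field(default_factory=set)


def bundled_consent(user_id, partners):
    """What one 'I agree' button did: consent fans out to every partner by
    contract. This is the arrangement the regulator found invalid."""
    return [ConsentRecord(user_id, p, {"ads", "profiling", "geolocation"})
            for p in partners]


def per_controller_consent(user_id, grants):
    """What the GDPR appears to require: a specific grant, per controller and
    per purpose, collected from the user directly."""
    return [ConsentRecord(user_id, c, set(purposes))
            for c, purposes in grants.items()]


# One tap "consents" a user to hundreds of ad-tech partners they have never heard of:
records = bundled_consent("user-42", [f"adtech-partner-{i}" for i in range(300)])
print(len(records))    # 300
```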

This cuts to the core of the current advertising model, and of Google and Facebook’s dominance of online advertising.

It should get very interesting.

This Exceeds My Collection of Face Palm Icons

Nope, not enough

The keynote speaker at the 15th International Conference on Advances in Computer Entertainment Technology is Steve Bannon.

Yes, that guy, the poster child for drinking yourself to death.

That racist former member of the Trump administration will keynote ACE 2018.

What the f%$# were they thinking?

It’s like inviting Vox Day to keynote a Worldcon.

This will not end well.

This is Seriously Cyberpunk, in a Seriously Dystopian Way

Next year, Amy Winehouse will conduct a worldwide tour, despite having died more than 7 years ago.

Dead celebrities touring as computer-generated simulacra really does sound like something straight out of William Gibson’s darkest visions:

A hologram of Amy Winehouse is set for a worldwide tour in 2019. A projection of the late singer will “perform” digitally remastered arrangements of her songs, backed by a live band, singers and what the production company Base Hologram calls “theatrical stagecraft”.

Winehouse’s father, Mitch, described the endeavour as a dream. “To see her perform again is something special that really can’t be put into words,” he said. “Our daughter’s music touched the lives of millions of people and it means everything that her legacy will continue in this innovative and groundbreaking way.”

Mitch Winehouse said the tour will raise money and awareness for the Amy Winehouse Foundation. The charity educates young people about drug and alcohol misuse, provides support for those at risk and supports the development of disadvantaged young people through music.

The show is expected to last 75 to 110 minutes.

This is profoundly creepy.

Google+ Still Has 500,000 Users?

Google discovered that its programming interfaces for Google+ allowed third-party developers to access private profile data.

It sat on this information for months and then, when threatened with exposure, announced that it would be shuttering Google+:

Google exposed the private details of almost 500,000 Google+ users and then opted not to report the lapse, in part out of concern disclosure would trigger regulatory scrutiny and reputational damage, The Wall Street Journal reported Monday, citing people briefed on the matter and documents that discussed it. Shortly after the article was published, Google said it would close the Google+ social networking service to consumers.

The exposure was the result of a flaw in programming interfaces Google made available to developers of applications that interacted with users’ Google+ profiles, Google officials said in a post published after the WSJ report. From 2015 to March 2018, the APIs made it possible for developers to view profile information not marked as public, including full names, email addresses, birth dates, gender, profile photos, places lived, occupation, and relationship status. Data exposed didn’t include Google+ posts, messages, Google account data, phone numbers, or G Suite content. Some of the users affected included paying G Suite users.

Google Chief Executive Sundar Pichai knew of the glitch and the decision not to publicly disclose it, the WSJ reported. Based on a two-week test designed to measure the impact of the API bugs before they were fixed, Google analysts believe that data for 496,951 users was improperly exposed.

Google: That whole “Don’t be evil” thing is “inoperative.”
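
The bug class here is depressingly banal. Something like the following, which is my own illustration and not Google’s actual code, is all it takes: an API that hands out every profile field without checking per-field visibility:

```python
# Illustration only: not Google's code, just the shape of the flaw described above.

PROFILE = {
    "display_name": {"value": "Jane Doe",         "visibility": "public"},
    "email":        {"value": "jane@example.com", "visibility": "private"},
    "birthday":     {"value": "1985-03-14",       "visibility": "private"},
    "occupation":   {"value": "Engineer",         "visibility": "friends"},
}

def profile_api_buggy(profile):
    """Returns every field, regardless of the user's visibility settings."""
    return {name: f["value"] for name, f in profile.items()}

def profile_api_fixed(profile):
    """Returns only the fields the user actually marked as public."""
    return {name: f["value"] for name, f in profile.items()
            if f["visibility"] == "public"}

print(profile_api_buggy(PROFILE))   # leaks email, birthday, and occupation
print(profile_api_fixed(PROFILE))   # only the public display name
```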

BTW, I am aware of the irony present in my using a Google blogging platform, and (barely) monetizing said blog on Google™ Adsense™.

Copyright Trolling, Sony Edition

Sony Music Entertainment has been forced to abandon its claim that it owned 47 seconds of video of musician James Rhodes using his own piano to play music written by Johann Sebastian Bach.

Last week, Rhodes recorded a short video of himself playing a portion of Bach’s first Partita and posted it to Facebook. Bach died in 1750, so the music is obviously in the public domain. But that didn’t stop Sony from claiming the rights to the audio in Rhodes’ video.

“Your video matches 47 seconds of audio owned by Sony Music Entertainment,” said a notice Rhodes received on Facebook. Facebook responded by muting the audio in Rhodes’ video. Remarkably, when Rhodes disputed Sony’s claim, Sony stuck to its guns and denied the appeal. As far as we know, Sony hasn’t commented publicly on the dispute or explained why it continued to claim Rhodes’ music.

But whereas Facebook’s formal appeals process didn’t work for Rhodes, public shaming seems to have done the trick. Rhodes’ tweet on the topic got more than 2,000 retweets, and Rhodes also emailed senior Sony Music executives about the issue.

As one commenter noted:

Guys, let’s be reasonable here.

Without strong copyright enforcement, composers like Bach will have no incentive to produce new music.

Sony is just ensuring that Bach has the financial freedom to release his next album. Really they’re doing you a favor.

Like DMCA Takedown Notices ……… On Acid

It appears that the abuse and misuse of the DMCA, with its lawsuits against printer ink manufacturers and extortion by corrupt lawyers, has not informed the people in Washington, DC, who want to legalize revenge hacking, which, of course, will be outsourced to incompetent and malicious contractors:

Imagine this: Facebook is set to release a slew of shiny new features designed to win back users and increase engagement. But before it can release its products, Renren (one of China’s Facebook clones) releases the same features across its platform, beating Facebook to the punch. Infuriated, Facebook security officials claim they know with near certainty that their plans were stolen by a hacker on behalf of the Chinese social-media giant. Some furious employees put in motion a plan to load a devastating malware attack on the hackers’ networks as payback.

Is that even legal? Can Facebook retaliate with a hack of its own? Under current U.S. law, the answer is no, but a growing number of legislators are attempting to change that. Yesterday, Rhode Island Democratic senator Sheldon Whitehouse became the most recent lawmaker to express support for revenge hacking.

“We ought to think hard about how and when to license hack-back authority so capable, responsible private-sector actors can deter foreign aggression,” Whitehouse said. “If [a major CEO] wanted permission to figure out how to hack back, I don’t think he’d know what agency’s door to knock on to actually give him an answer.”

Hacking back (also known as revenge hacking) involves a retaliatory response by a private company or an individual after they are attacked by a malicious actor. While anyone can monitor and enforce their own network and devices, the Computer Fraud and Abuse Act prevents people from going a step further and hacking into someone else’s network, even if they were hacked first. In his recent book, The Perfect Weapon, journalist David Sanger likens hacking back to a retaliatory home invader.

“It’s illegal, just as it’s illegal to break into the house of someone who robbed your house in order to retrieve your property,” Sanger writes.

The idea that legalizing hacking by Mark Zuckerberg and Jeff Bezos will make anyone any safer is a corrupt fiction.

They’re Bond villains, and granting them immunity to pull this crap will end in tragedy.

Tweet Stream of the Day

This comparison of Silicon Valley and the Soviet Union is genius:

Things that happen in Silicon Valley and also the Soviet Union:

– waiting years to receive a car you ordered, to find that it’s of poor workmanship and quality

– promises of colonizing the solar system while you toil in drudgery day in, day out

— Anton Troynikov (@atroyn) July 5, 2018

– living five adults to a two room apartment

– being told you are constructing utopia while the system crumbles around you

— Anton Troynikov (@atroyn) July 5, 2018

– ‘totally not illegal taxi’ taxis by private citizens moonlighting to make ends meet

– everything slaved to the needs of the military-industrial complex

— Anton Troynikov (@atroyn) July 5, 2018

– mandatory workplace political education

– productivity largely falsified to satisfy appearance of sponsoring elites

— Anton Troynikov (@atroyn) July 5, 2018

It goes on, but it is well worth the read.

Someone is SO Getting Fired

The New York Times published a rather ordinary article about how various space launcher firms are trying to appeal to the hyper-rich.

What you may not notice if you click through to the story is the original URL, https://www.nytimes.com/2018/06/09/style/pigs-in-spaaaaaace.html, which now redirects to https://www.nytimes.com/2018/06/09/style/axiom-space-travel.html.

I prefer the first URL, since this is clearly a “Rich Pig” story, even if the author played it straight.

Kudos to whoever got this on the Times web site, if only for a few hours.

H/t Naked Capitalism for finding this bit of IT mischief.

Crap Websites

Because of #GDPR, USA Today decided to run a separate version of their website for EU users, which has all the tracking scripts and ads removed. The site seemed very fast, so I did a performance audit. How fast the internet could be without all the junk! 🙄
5.2MB → 500KB pic.twitter.com/xwSqqsQR3s

— Marcel Freinbichler (@fr3ino) May 26, 2018

It really is remarkable just how crapified and bloated websites have become.

It’s ads and trackers that crapify websites, and I’m sure that within the next few days, USA Today will succeed in making its EU website just as bloated and slow as the US one.

It’s Completely Nuts

We’re going to include some fun games as hidden Easter eggs in Tesla S, X & 3. What do you think would be most fun in a car using the center touch screen?

— Elon Musk (@elonmusk) May 22, 2018

Seriously, Easter Eggs?

Elon Musk is giving Donald Trump a serious run for his money in the insane tweet division.

Who in their right mind thinks that it wouldn’t be completely insane to put “Easter Eggs” in the critical systems of a 2 ton death machine?

No one, that’s who.

Elon Musk just tweeted that he thinks it’s just ducky to put a f%$#ing Rick Roll in the display of his cars.

This is profoundly unhinged.

Well, this is Profoundly NOT Reassuring

It appears that the robot Uber that ran down and killed a pedestrian saw the woman, but ignored her, because it had been programmed to.

Basically, Uber’s self-driving software is so crappy, and generates so many false positives, that it was tuned to the point of ignoring actual human beings.

Uber is still Uber:

Uber has concluded the likely reason why one of its self-driving cars fatally struck a pedestrian earlier this year, according to tech outlet The Information. The car’s software recognized the victim, Elaine Herzberg, standing in the middle of the road, but decided it didn’t need to react right away, the outlet reported, citing two unnamed people briefed on the matter.

The reason, according to the publication, was how the car’s software was “tuned.” 

Here’s more from The Information:

Like other autonomous vehicle systems, Uber’s software has the ability to ignore “false positives,” or objects in its path that wouldn’t actually be a problem for the vehicle, such as a plastic bag floating over a road. In this case, Uber executives believe the company’s system was tuned so that it reacted less to such objects. But the tuning went too far, and the car didn’t react fast enough, one of these people said.

Let me translate this into English: Uber put a 4,000-pound death machine on the road with software that was incapable of telling the difference between a plastic bag and a human being.
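
A minimal sketch of what that kind of “tuning” amounts to follows. This is my own illustration, not Uber’s software, and the threshold numbers are invented:

```python
# Illustration only: not Uber's code. The point is that one "ignore anything
# below this confidence" knob, turned too far, makes the planner treat a real
# pedestrian the same way it treats a drifting plastic bag.

from dataclasses import dataclass


@dataclass
class Detection:
    label: str
    confidence: float        # classifier's confidence that this is a real obstacle


IGNORE_BELOW = 0.85          # invented value: tuned high to suppress false positives


def should_brake(detections):
    """Brake only for detections the tuning has not filtered out."""
    return any(d.confidence >= IGNORE_BELOW for d in detections)


frame = [
    Detection("plastic bag", 0.30),
    Detection("pedestrian with bicycle", 0.70),   # a real person, scored low
]

print(should_brake(frame))   # False: the pedestrian is filtered out with the bag
```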

This is not just reprehensible, it might very well be criminal.