Tag: Software

The Difference Between Training and Credentialism

A recent study has shown that “only 36% of Indian engineers can write compilable code.”

This is an indication that the Indian education system has a problem.

Even ignoring the basic question of when one should educate, and when one should just train, it appears that Indian degrees are largely about acquiring credentials.

I have noticed this trend in both US education and employment, but it’s no surprise that credentialism is more of an issue in India: its caste system, at its core a system of societally enforced credentials, has been in force for thousands of years:

Only 36% of software engineers in India can write compilable code based on measurements by an automated tool that is used across the world, the Indian skills assessment company Aspiring Minds says in a report.

The report is based on a sample of 36,800 engineers from more than 500 colleges across India.

Aspiring Minds said it used the automated tool Automata, a 60-minute test taken in a compiler-integrated environment, which rates candidates on programming ability, programming practices, run-time complexity and test case coverage.

It uses advanced artificial intelligence technology to automatically grade programming skills.

“We find that out of the two problems given per candidate, only 14% engineers are able to write compilable codes for both and only 22% write compilable code for exactly one problem,” the study said.
It further found that only 14.67% of the test subjects were employable by an IT services company.

When it came to writing fully functional code using the best practices for efficiency and writing, only 2.21% of the engineers studied made the grade.
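For a sense of what such assessments ask, the problems are typically first-semester exercises. The actual Automata questions are proprietary, so the following Python sketch is only a hypothetical example of the genre, with the grading criteria noted in the docstring:

```python
def second_largest(nums):
    """Return the second-largest distinct value in a list.

    A typical entry-level screening problem: candidates are graded
    first on whether the code compiles/runs at all, then on
    correctness, coding practices, and run-time complexity.
    """
    if len(set(nums)) < 2:
        raise ValueError("need at least two distinct values")
    first = second = float("-inf")
    for n in nums:               # single pass: O(n) time, O(1) space
        if n > first:
            first, second = n, first
        elif first > n > second:
            second = n
    return second

print(second_largest([3, 9, 4, 9, 7]))  # 7
```

If only 36% of candidates can produce something that compiles at all, correctness and complexity never even enter the grading.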

I have heard this complaint for years from my IT friends, and now we have a study.

Rinse, Lather, Repeat: F-35 Edition

Development testing of the Lockheed Martin F-35 could be delayed by 12 months and cost another $1.7 billion, the US Government Accountability Office (GAO) warns in a new report published on 24 April.

In a report submitted to the US Congress, the GAO says that the F-35’s government managers at the joint programme office (JPO) have adopted an “optimistic” estimate for a five-month delay and $532 million cost overrun to complete Block 3F software, the fifth and final software release to support the 15-year-long system development and demonstration phase of the family of stealth fighters.

GAO’s analysis, based on historical data, suggests Block 3F testing won’t be complete until May 2018, or 12 months later than currently scheduled. The GAO’s anticipated cost growth of $1.7 billion would raise overall development programme costs to $56.8 billion, $22.4 billion higher than the original budget at contract award in October 2001.

This sort of clusterf%$# has become so common for the F-35 that I’m not sure if it even qualifies as news these days.

It’s Cheap, It Works Better, Let’s Kill It

I just discovered that the Veterans Administration has a medical records system that it has been running and evolving since the late 1970s.

It runs better than commercial systems, largely because doctors were brought into the system early, and because its open architecture can be easily adapted to the specific needs of individual departments and locations.

It’s also much cheaper than the commercial alternatives.

Of course, this means that it must be replaced by an overpriced, underperforming system from a politically connected contractor:

Four decades ago, in 1977, a conspiracy began bubbling up from the basements of the vast network of hospitals belonging to the Veterans Administration. Across the country, software geeks and doctors were puzzling out how they could make medical care better with these new devices called personal computers. Working sometimes at night or in their spare time, they started to cobble together a system that helped doctors organize their prescriptions, their CAT scans and patient notes, and to share their experiences electronically to help improve care for veterans.

Within a few years, this band of altruistic docs and nerds—they called themselves “The Hardhats,” and sometimes “the conspiracy”—had built something totally new, a system that would transform medicine. Today, the medical-data revolution is taken for granted, and electronic health records are a multibillion-dollar industry. Back then, the whole idea was a novelty, even a threat. The VA pioneers were years ahead of their time. Their project was innovative, entrepreneurial and public-spirited—all those things the government wasn’t supposed to be.

Of course, the government tried to kill it.

Though the system has survived for decades, even topping the lists of the most effective and popular medical records systems, it’s now on the verge of being eliminated: The secretary of what is now the Department of Veterans Affairs has already said he wants the agency to switch over to a commercial system. An official decision is scheduled for July 1. Throwing it out and starting over will cost $16 billion, according to one estimate.

What happened? The story of the VA’s unique computer system—how the government actually managed to build a pioneering and effective medical data network, and then managed to neglect it to the point of irreparability—is emblematic of how politics can lead to the bungling of a vital, complex technology. As recently as last August, a Medscape survey of 15,000 physicians found that the VA system, called VistA, ranked as the most usable and useful medical records system, above hundreds of other commercial versions marketed by hotshot tech companies with powerful Washington lobbyists. Back in 2009, some of the architects of the Affordable Care Act saw VistA as a model for the transformation of American medical records and even floated giving it away to every doctor in America.

………

The Hardhats’ key insight—and the reason VistA still has such dedicated fans today—was that the system would work well only if they brought doctors into the loop as they built their new tools. In fact, it would be best if doctors actually helped build them. Pre-specified computer design might work for an airplane or a ship, but a hospital had hundreds of thousands of variable processes. You needed a “co-evolutionary loop between those using the system and the system you provide them,” says one of the early converts, mathematician Tom Munnecke, a polymathic entrepreneur and philanthropist who joined the VA hospital in Loma Linda, California, in 1978.

………

Munnecke, a leading Hardhat, remembers it as an exhilarating time. He used a PDP11/34 computer with 32 kilobytes of memory, and stored his programs, development work and his hospital’s database on a 5-megabyte disk the size of a personal pizza. One day, Munnecke and a colleague, George Timson, sat in a restaurant and sketched out a circular diagram on a paper place mat, a design for what initially would be called the Decentralized Hospital Computer Program, and later VistA. The MUMPS computer language was at the center of the diagram, surrounded by a kernel of programs used by everyone at the VA, with applications floating around the fringes like electrons in an atom. MUMPS was a ludicrously simple coding language that could run with limited memory and great speed on a low-powered computer. The architecture of VistA was open, modular and decentralized. All around the edges, the apps flourished through the cooperation of computer scientists and doctors.

………

This is bitter fruit for many VistA fans. Some still say the system could be fixed for $200 million a year—the cost of a medium-sized hospital system’s EHR installation. “I don’t know if there even is an EHR out there with data comparable to the longitudinal data that VistA has about veterans, and we certainly do not want to throw that data out if a new EHR were to be used,” says Nancy Anthracite, a Hardhat and an infectious-disease physician.

Eventually, this system will be shut down, and replaced by a more expensive inferior commercial system, because that is how the government rolls these days.

It’s been heading in this direction for a while, but the institutionalization of dumbing down government agencies so as to require expensive contractors really got its start in Dick Cheney’s programs when he was Secretary of Defense, and it became an existential need in response to the Clinton administration’s “Reinventing Government” initiative.

It all comes down to normalizing corruption.

I am Surprised and Impressed

It appears that Wikileaks is exercising a bit more due diligence in its releases, as it is making the CIA hacks leaked to it available to the tech firms that were targeted before making them available to the general public:

Technology firms will get “exclusive access” to details of the CIA’s cyber-warfare programme, Wikileaks has said.

The anti-secrecy website has published thousands of the US spy agency’s secret documents, including what it says are the CIA’s hacking tools.

Founder Julian Assange said that, after some thought, he had decided to give the tech community further leaks first.

“Once the material is effectively disarmed, we will publish additional details,” Mr Assange said.

………

Mr Assange said that his organisation had “a lot more information on the cyber-weapons programme”.

He added that while Wikileaks maintained a neutral position on most of its leaks, in this case it did take a strong stance.

“We want to secure communications technology because, without it, journalists aren’t able to hold the state to account,” he said.

Mr Assange also claimed that the intelligence service had known for weeks that Wikileaks had access to the material and done nothing about it.

He also spoke more about the Umbrage programme, revealed in the first leaked documents.

He said that a whole section of the CIA is working on Umbrage, a system that attempts to trick people into thinking that they had been hacked by other groups or countries by collecting malware from other nation states, such as Russia.

“The technology is designed to be unaccountable,” he said.

He claimed that an anti-virus expert, who was not named, had come forward to say that he believed sophisticated malware that he had previously attributed to Iran, Russia and China, now looked like something that the CIA had developed.

This is why cyber security needs to be completely separate from any intelligence agency.

Otherwise, there is too much pressure to cover up the bugs so that the folks on the other side of the office can spy on the rest of us.

Any hole which the CIA, NSA, DIA, or other TLA* can exploit can also be exploited by criminals, the Chinese, the Russians, terrorists, or the New England Patriots.

*Three letter acronym.

This Should Surprise No One

Uber has for years engaged in a worldwide program to deceive the authorities in markets where its low-cost ride-hailing service was resisted by law enforcement or, in some instances, had been banned.

The program, involving a tool called Greyball, uses data collected from the Uber app and other techniques to identify and circumvent officials who were trying to clamp down on the ride-hailing service. Uber used these methods to evade the authorities in cities like Boston, Paris and Las Vegas, and in countries like Australia, China and South Korea.

Greyball was part of a program called VTOS, short for “violation of terms of service,” which Uber created to root out people it thought were using or targeting its service improperly. The program, including Greyball, began as early as 2014 and remains in use, predominantly outside the United States. Greyball was approved by Uber’s legal team.

Greyball and the VTOS program were described to The New York Times by four current and former Uber employees, who also provided documents. The four spoke on the condition of anonymity because the tools and their use are confidential and because of fear of retaliation by Uber.

Uber’s use of Greyball was recorded on video in late 2014, when Erich England, a code enforcement inspector in Portland, Ore., tried to hail an Uber car downtown in a sting operation against the company.
At the time, Uber had just started its ride-hailing service in Portland without seeking permission from the city, which later declared the service illegal. To build a case against the company, officers like Mr. England posed as riders, opening the Uber app to hail a car and watching as miniature vehicles on the screen made their way toward the potential fares.
But unknown to Mr. England and other authorities, some of the digital cars they saw in the app did not represent actual vehicles. And the Uber drivers they were able to hail also quickly canceled. That was because Uber had tagged Mr. England and his colleagues — essentially Greyballing them as city officials — based on data collected from the app and in other ways. The company then served up a fake version of the app, populated with ghost cars, to evade capture.
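Mechanically, what the Times describes reduces to a lookup and a fork: if an account has been tagged, serve it a fabricated map. A speculative Python sketch — the account IDs, function names, and tagging set below are invented for illustration, not Uber’s actual code:

```python
import random

# Hypothetical illustration of the behavior the NYT describes:
# "greyballed" accounts see ghost cars that will never arrive.
GREYBALLED = {"rider-4412"}  # accounts tagged as suspected officials

def nearby_cars(account_id, real_cars):
    """Return the list of cars this account should see in the app."""
    if account_id in GREYBALLED:
        # Serve a fake view: plausible-looking ghost cars.
        return [{"id": f"ghost-{i}", "eta_min": random.randint(2, 8)}
                for i in range(len(real_cars))]
    return real_cars

real = [{"id": "car-1", "eta_min": 3}]
print(nearby_cars("rider-9001", real))  # ordinary rider: real cars
print(nearby_cars("rider-4412", real))  # tagged rider: ghost cars only
```

The hard part, per the article, was the tagging itself — done "based on data collected from the app and in other ways" — not the serving of the fake view.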

………

This is where the VTOS program and the use of the Greyball tool came in. When Uber moved into a new city, it appointed a general manager to lead the charge. This person, using various technologies and techniques, would try to spot enforcement officers.

I am an engineer, not a lawyer, dammit,* but it appears to me that this strategy has four letters written all over it: R.I.C.O.

I’d love to see someone get seriously medieval on their ass.

*I love it when I get to go all Dr. McCoy!

Well, This is an Interesting Take on the Problems in Indian IT

According to this report by the Indian website Scroll.in, cheating is so endemic that even graduates of many of the most prestigious schools in India are unable to do basic coding.

It appears to be the result of a toxic mix of entitlement and credentialism.

This is not to say that all Indian programmers are incompetent, though an Indian IT executive basically gave up on ⅔ of all IT grads in the country, which is a remarkably high failure rate for the elite institutions.

Reports of mass copying during school and college examinations in several states, including Bihar and Uttar Pradesh, are common. But a blog post by a computer science professor indicates that students at the prestigious Indian Institute of Technology, and other engineering colleges, indulge in it too.

Earlier this month, Dheeraj Sanghi, a professor at the Indraprastha Institute of Information Technology-Delhi, wrote a blog post on the quality of the country’s information technology engineers, which corporate recruiters also seem to be concerned about.

In the post, titled “CS education is poor because of copying”, Sanghi referred to a statement by Srinivas Kandula, chief executive of information technology major Capgemini India, at a business event in Mumbai earlier this month.

At the event, Kandula said: “I am not very pessimistic, but it is a challenging task and I tend to believe that 60-65 per cent of them [IT recruits] are just not trainable.”

………

Speaking to this reporter, Sanghi said: “In many colleges, even in some of the IITs but to a lesser extent, students either copy the code for a programme from the net, or one student writes it, and the others copy. The code is tested in the laboratory. If it runs – and it does – the student is awarded marks even if the lines are not original.” He added that these shortcuts are adopted as early as the first semester.

………

In his blog post, he recounted that he was recently part of a selection committee to recruit programmers for a government department. He found that most applicants he interviewed, including those who had “several years of experience in industry”, could not perform a variety of tasks they ought to have learnt at engineering college. “These [were] all the programmes we ask our first semester students who have never programmed before,” he wrote.

………

But Indian Institutes of Technology have had their fair share of cheating scandals, some of which seem to have resulted in a cover-up.

For instance, in 2011, a computer science professor at the Indian Institute of Technology-Kharagpur, was suspended for reporting a variety of irregularities at the institution, including mass cheating in examinations. It led to a court case, which is still on. With the next hearing scheduled for Friday, the professor was reluctant to talk to this reporter but his lawyer Pranav Sachdeva said that one of the charges against his client was that “he spoke to the media about it”. Sachdeva added that the IIT had “tried to impose compulsory retirement [on his client] but the Delhi High Court put a stop to it”.

………

Even though engineering colleges can easily check copying if they wanted to by failing students who did not submit original programmes, there’s perhaps a valid reason why institutes hold back. “I know of one college which tried this,” wrote Sanghi in his blog. “Every single glass [pane] in all buildings were broken by the angry students.”

Any comments from people who have been through an IT education in India, or those who have experience working with Indian IT professionals would be appreciated.

Still a Few Bugs in the System

Some neuroscientists decided to see if the latest neuroscience tools could handle a simpler case than the human brain.

They chose a 40+ year old CPU, and they failed abysmally:

In 2014, the US announced a new effort to understand the brain. Soon, we would map every single connection within the brain, track the activity of individual neurons, and start to piece together some of the fundamental units of biological cognition. The program was named BRAIN (for Brain Research through Advancing Innovative Neurotechnologies), and it posited that we were on the verge of these breakthroughs because both imaging and analysis hardware were finally powerful enough to produce the necessary data, and we had the software and processing power to make sense of it.

But this week, PLoS Computational Biology published a cautionary note that suggests we may be getting ahead of ourselves. Part experiment, part polemic, a computer scientist got together with a biologist to apply the latest neurobiology approaches to a system we understand far more completely than the brain: a processor booting up the games Donkey Kong and Space Invaders. The results were about as awkward as you might expect, and they helped the researchers make their larger point: we may not understand the brain well enough to understand the brain.

On the surface, this may sound a bit ludicrous. But it gets at something fundamental to the nature of science. Science works on the basis of having models that can be used to make predictions. You can test those models and use the results to refine them. And you have to understand a system on at least some level to build those models in the first place.

 ………

That’s where Donkey Kong comes in.

Games on early Atari systems were powered by the 6502 processor, also found in the Apple I and Commodore 64. The two authors of the new paper (Eric Jonas and Konrad Paul Kording) decided to take this relatively simple processor and apply current neuroscience techniques to it, tracking its activity while loading these games. The 6502 is a good example because we can understand everything about the processor and use that to see how well the results match up. And, as they put it, “most scientists have at least behavioral-level experience with these classical video game systems.”

So they built upon the work of the Visual 6502 project, which got ahold of a batch of 6502s, decapped them, and imaged the circuitry within. This allowed the project to build an exact software simulator, which they could use to test neuroscience techniques. But it also enabled the researchers to perform a test of the field of “connectomics,” which tries to understand the brain by mapping all the connections of the cells within it.

To an extent, the fact that their simulator worked is a validation of the approach. But, at the same time, the chip is incredibly simple: there is only one type of transistor, as opposed to the countless number of specialized cells in the brain. And the algorithms used to analyze the connections only got the team so far; lots of human intervention was required as well. “Even with the whole-brain connectome,” Jonas and Kording conclude, “extracting hierarchical organization and understanding the nature of the underlying computation is incredibly difficult.”
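One technique the paper borrows from neuroscience is the “lesion” study: knock out one transistor at a time and see which games stop booting. The idea can be illustrated with a toy circuit; the Python sketch below is my own illustration, not the authors’ code (the real study lesioned thousands of transistors in the Visual 6502 netlist):

```python
# Toy "lesion study": disable each gate in a half adder and observe
# whether behavior changes, analogous to knocking out one transistor
# in the simulated 6502 and checking whether a game still boots.

def half_adder(a, b, lesion=None):
    """Return (sum, carry) bits; a lesioned gate is forced to 0."""
    xor = 0 if lesion == "xor" else a ^ b    # sum bit
    carry = 0 if lesion == "and" else a & b  # carry bit
    _ = 0 if lesion == "or" else a | b       # computed but never used
    return xor, carry

def behaves_correctly(lesion):
    """Does the lesioned circuit match the intact one on all inputs?"""
    return all(
        half_adder(a, b, lesion) == half_adder(a, b)
        for a in (0, 1) for b in (0, 1)
    )

for gate in ("xor", "and", "or"):
    print(gate, "essential:", not behaves_correctly(gate))
```

The unused "or" gate is the interesting case: lesioning it changes nothing, a reminder that a lesion with no behavioral effect — or, as the paper found, an effect on only one of the games — licenses far weaker conclusions than it appears to.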

Remember, in a microprocessor, a transistor is a transistor is a transistor; in the brain, neurons and ganglia vary from cell to cell.

This is a valid test of the software: the 6502 is arguably the most thoroughly understood CPU in existence, and Donkey Kong is arguably one of the best-understood pieces of software in existence.

And they still could not do it on a processor that can access only 64K of RAM.

We are much further from mapping the brain in any detail than is implied in the mainstream media reports.

Our Glorious Defense Procurement System

Yes, the most expensive defense procurement program in history, the F-35, is still not ready for prime time, with its maintenance software still unable to support aircraft operations:

Key software for the troubled F-35 fighter jet has been repeatedly delayed, causing problems for the British armed forces as they wait for Americans to iron out the bugs.

The F-35’s Autonomic Logistics Information System (ALIS) is the heart of the support offering bundled with the F-35 by its manufacturer, Lockheed Martin.

The latest version of ALIS – version 2.0.2 – has been delayed by at least six months and counting, according to the US Department of Defense’s Director of Operational Test and Evaluation (DOT&E), and units are instead stuck with version 2.0.1.3.

“It has yet to successfully complete testing and likely will not be fielded until early 2017,” according to the F-35 section of DOT&E’s annual report [PDF, 62 pages] to the US Congress. Version 2.0.2 will allow military personnel, rather than engine manufacturers and current maintenance contractors Pratt & Whitney, to read and act upon engine health data, but has not yet been deployed.

Although the release version of ALIS is intended to be version 3, with various beta releases bringing incremental extra capabilities until the release of v3, “delays in ALIS 2.0.2 development have also delayed the development of ALIS 3.0,” said DOT&E. This, warned the director, would result in key functionality being released as updates to v3.0 instead of being baked into the “final” software package deployed to F-35 customers – including the UK.

………

The 62-page report also revealed that the F-35 is temperamental when ground crew plug their Panasonic Toughbook diagnostic laptops into the aircraft and sync them: “In many instances, maintainers must attempt to synch several PMAs [portable maintenance aids – the laptops] with an aircraft before finding one that will successfully connect.”

………

Moreover, testing of ALIS up until 2016 took place on “representative hardware” instead of actual aircraft and ground base equipment. “The current closed environment does not adequately represent the variety of ways in which the Services operate ALIS in different environments,” DOT&E drily noted.

There was also a significant problem the first time that US personnel tried deploying F-35s and ALIS away from their home base:

…they had a great deal of difficulty using ALIS on the local base network. After several days of troubleshooting, Information Technology personnel and ALIS administrators determined that they had to change several settings on the base network at Mountain Home and in the web interface application (i.e., Internet Explorer) to permit users to log on to ALIS. One of these changes involved lowering the security setting on the base network, an action that may not be compatible with required cybersecurity and network protection standards in place.

ALIS is used by naval and air force personnel to determine in real time the state of the aircraft, view flight plans, and review each jet’s entire history from the moment it leaves the factory. It is an end-to-end management and planning system for pilots, maintainers and commanders alike – the ultimate vendor lock-in.

Controversially, it also sends each jet’s history back to the US, regardless of which country actually owns that aircraft – though Lockheed has promised it won’t read the pilots’ names.

(emphasis mine)

That “Vendor Lock-In” snark points to one of the more significant problems with the program: the inmates (Lockheed Martin) are running the asylum.

At every step of the process, LM has been allowed to design the system to ensure that it sits athwart all operations, extracting a toll, and the results have been buggy, underperforming, and opaque systems.

Compare this to the latest Saab Gripen, which is on time and on budget, thanks largely to a reliance on modular software and the use of off-the-shelf systems wherever possible. (Here and here)

Lockheed used the same architecture for the F-22, and updates are tortuous and expensive.

Even if this plane achieves all of its performance goals, it will be unaffordable for many nations from a direct operating cost perspective as a result.

This is F%$#ing Inspired

Self-driving cars are all the rage right now, though I really don’t see the tech taking off for a very long time.

The problem is how to make an AI play nice with people on the road, who are inattentive, stupid, violent, vindictive, and frequently malicious.

And once you do, how do you test it?

Rolling it out on the road, with an operator in the driver’s seat, is expensive.

Just the liability insurance would be insane.

Obviously, one solution, for the software at least, is to test it in a virtual environment, but this raises an important question: Where can one find a virtual reality that even comes close to mimicking the insanity that is humans driving cars?

Three Words: Grand Theft Auto:

Developers building self-driving cars can now take their AI agents for a spin in the simulated open world of Grand Theft Auto V – via OpenAI’s machine-learning playground, Universe.

The open-source MIT-licensed code gluing GTA V to Universe is maintained by Craig Quiter, who works for Otto – the Uber-owned startup that delivered 51,744 cans of Budweiser over 193km (120 miles) using a self-driving truck.

The software comes with a trained driving agent; all developers need is a copy of the game to get cracking. After that, programmers can swap out the demo AI model with their own agents to test their code and neural networks. Universe and Quiter’s integration code takes care of the fiddly interfacing with the game.

Video games new and old provide great training grounds for developing reinforcement learning agents, which learn through trial and error – or rather, trial and reward when things go right. OpenAI’s Universe was released in December, and is a wedge of open-source middleware that connects game controls and video displays to machine-learning agents so they can be trained in the virtual arenas.
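Under the hood, Universe exposes the standard reinforcement-learning loop: observe, act, collect a reward, repeat. A self-contained Python stub of that trial-and-reward cycle — the toy environment and trivial policy below are invented for illustration; the real integration drives GTA V through Universe’s remote-environment interface:

```python
import random

class ToyDrivingEnv:
    """Stand-in for a gym/Universe environment: the agent must keep
    its lane position near 0; reward is higher the closer it stays."""
    def reset(self):
        self.pos = random.uniform(-1, 1)
        return self.pos

    def step(self, action):            # action: steering in [-1, 1]
        self.pos += action * 0.5
        reward = 1.0 - abs(self.pos)   # the trial-and-reward signal
        done = abs(self.pos) > 2.0     # ran off the road entirely
        return self.pos, reward, done

def agent(observation):
    """Trivial hand-written policy: steer back toward lane center.
    A learning agent would instead update itself from the rewards."""
    return -observation

env = ToyDrivingEnv()
obs = env.reset()
total = 0.0
for _ in range(100):                   # one episode of 100 steps
    obs, reward, done = env.step(agent(obs))
    if done:
        break
    total += reward
print("episode reward:", round(total, 2))
```

Swap the toy environment for GTA V via Universe and the toy policy for a neural network, and the loop is structurally the same — which is exactly why a game makes such a convenient training arena.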

Admittedly, GTA, with its hot rods, weapons, and rampant crime, is only a pale shadow of commuting in Boston,* but putting self-driving automobile software through its paces in the fictional state of San Andreas is a truly inspired reuse of code.

*No joke: I knew that it was time for me to leave New England when I screamed at someone for NOT cutting me off in a parking lot.

The Wonders of Capitalism and Competition

Google has decided to use the DMCA to lock manufacturers who make hardware compatible with competing platforms out of Google Home:

A closed-door unveiling of the forthcoming Google Home smart speaker platform included the nakedly anticompetitive news that vendors whose products support Amazon’s Echo will be blocked from integrating with Google’s own, rival platform.

These platforms are typically designed to allow their vendors to invoke Section 1201 of the DMCA, which makes it a felony to change their configurations in unauthorized ways, meaning that Google could convert its commercial preference (“devices either support Google Home or Amazon Echo, but not both”) into a legal right (“we can use the courts and the police to punish people who make products that let you expand your device’s range of features to support whichever platform you choose to use”).

This is our modern economy: The free market has been supplanted by parasitic rent seeking.

The DMCA is merely one example of this. It permeates our economy.

We see it in our IP regulations, our finance system, and our so-called “free trade” deals.

It’s money for nothing, and these rents are redirected toward our political process to buy more direct and indirect subsidies.

The only question is how it ends: with an outbreak of sanity, or Madame la Guillotine.