I went to the O’Reilly AI conference this week.
I particularly went to sessions about finance. A very big problem with using machine learning for finance (and with machine learning in a lot of applications) is that, with the current state of the art, the reasoning isn’t very transparent. The point was made that transparency wasn’t just a problem for finance; it was a problem generally, for people trying to debug machine learning systems, too.
This is a big deal with using machine learning to evaluate applicants for loans. This process is heavily regulated, and when you deny a loan to someone, you are required by law to explain exactly why, to prove that you weren’t discriminating illegally. And if the reasoning is based on a neural net with thousands of interacting floating point values, that’s not human-readable.
In one presentation, a researcher was trying to coax an explanation out of a neural net by perturbing the inputs and seeing how that changed the result. He reported progress, but it still seemed to be on the drawing board and a long way from convincing the regulators it was ready for prime time.
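As I understood it, the core of the technique is simple to sketch. The loan-style features and the stand-in scoring function below are my own invention for illustration, not from the presentation:

```python
# A minimal sketch of perturbation-based explanation. The "model"
# here is a made-up opaque scoring function; in practice it would
# be a neural net whose internals you can't read.

def score(features):
    # Stand-in black box: income, debt, years employed -> loan score.
    income, debt, years_employed = features
    return 0.5 * income - 0.8 * debt + 0.1 * years_employed

def explain(score_fn, features, eps=1.0):
    """Estimate each input's influence by nudging it slightly
    and watching how the score moves."""
    base = score_fn(features)
    influences = []
    for i in range(len(features)):
        perturbed = list(features)
        perturbed[i] += eps
        influences.append((score_fn(perturbed) - base) / eps)
    return influences

print(explain(score, [50.0, 10.0, 3.0]))
```

For a linear stand-in like this, the recovered influences are just the weights; the hard, unsolved part is whether such local estimates are faithful enough to satisfy a regulator when the model is a deep net.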
Some applications, like fraud detection on credit card transactions, are lightly regulated, so neural nets can be applied. If you call someone on their cellphone to ask them if they really made a certain credit card charge, you don’t have to explain your reasoning or prove that illegal discrimination wasn’t a factor, so it’s really the wild west.
It turns out that when credit card transactions are flagged for fraud, only about 1% of them turn out to be really fraudulent, so they’re trying hard to reduce false positives. They found that by applying machine learning, they could reduce the number of false positives by a factor of two.
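The arithmetic behind that is worth a quick check. Assuming the quoted numbers, halving the false positives roughly doubles the precision of the flags, though the vast majority of the calls are still false alarms:

```python
# Back-of-the-envelope check on the quoted numbers: if only 1% of
# flagged transactions are truly fraudulent, cutting false positives
# in half roughly doubles precision (to about 2%).

def precision(true_pos, false_pos):
    return true_pos / (true_pos + false_pos)

before = precision(1, 99)       # 1 real fraud per 100 flags
after = precision(1, 99 / 2)    # same frauds, half the false alarms
print(before, after)
```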
It turns out that machine learning can make huge mistakes. One case is “adversarial attacks”. They showed us two pictures, one of a school bus and one of a dog. Both were correctly identified by the model, as a “school bus” and a “dog”. Then they modified a lot of the pixels very slightly, in a way known to confuse the model, and showed us the altered pictures next to the originals. The change, even with the pictures side by side, was not noticeable to the human eye, but now the model identified both as an “ostrich”.
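The trick relies on making many tiny, coordinated changes that each push the model’s score the wrong way. Here is a toy version on a five-“pixel” linear classifier (all numbers invented; real attacks target deep networks, but the principle is the same):

```python
# Toy adversarial perturbation on a linear classifier: each pixel
# moves only 0.1, but the moves conspire to flip the decision.

def classify(pixels, weights):
    # Positive score -> "dog", negative -> "ostrich" (say).
    return sum(p * w for p, w in zip(pixels, weights))

weights = [0.4, -0.3, 0.2, -0.5, 0.1]
image = [0.5, 0.4, 0.6, 0.3, 0.2]

clean_score = classify(image, weights)          # positive: "dog"

# Nudge every pixel a small step in whichever direction lowers
# the score (the sign of its weight).
eps = 0.1
attacked = [p - eps if w > 0 else p + eps
            for p, w in zip(image, weights)]

attacked_score = classify(attacked, weights)    # negative: "ostrich"
print(clean_score, attacked_score)
```

No single pixel changes by more than 0.1, yet the classification flips, which is why the doctored pictures looked identical to us.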
In a less malicious but more serious case, they showed us a model that had been trained to distinguish stop signs from speed limit signs. Placing a yellow Post-it pad on the stop sign (without blocking any of the letters) confused the model so that it now identified the stop sign as a speed limit sign.
Another application was for agriculture. The main way we apply herbicide today is to genetically engineer crops that can tolerate the herbicide, and then douse the whole field with it. One company was working on a rig, towed behind a tractor, that could optically distinguish between crops and weeds and squirt herbicide on only the weeds. They were also working on distinguishing between different sorts of weeds, because some, like “pigweed”, are resistant to most herbicides and need special, more potent herbicides that you really wouldn’t want to apply indiscriminately.
One lecture was about a Stanford project to use machine learning to replace some data structures. The speaker claimed that a neural net could outperform a binary tree in some cases, since a binary tree depends on performing log n ‘if’s that can’t be executed simultaneously, while the neural net can be based on many simultaneous multiplies. It wasn’t just binary trees; that was just the main example he talked about. He felt that many of the data structures we are familiar with could be addressed, though in a lot of cases you would want to mix the machine learning with familiar, traditional data structures.
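A rough sketch of the idea as I understood it: a model predicts where a key sits in a sorted array, and a bounded local search corrects any error in the prediction. Here the “model” is just a linear fit standing in for a small neural net; all details are my own illustration, not the speaker’s code:

```python
# Sketch of a "learned index": predict the position of a key in a
# sorted array, then do a short local search around the guess.

def build_index(keys):
    """Fit position ~= a * key + b over the sorted keys, and record
    the worst-case prediction error so lookups stay correct."""
    lo, hi = keys[0], keys[-1]
    a = (len(keys) - 1) / (hi - lo)
    b = -a * lo
    max_err = max(abs(int(a * k + b) - i) for i, k in enumerate(keys))
    return a, b, max_err

def lookup(keys, model, key):
    a, b, max_err = model
    guess = int(a * key + b)
    start = max(0, guess - max_err)
    end = min(len(keys), guess + max_err + 1)
    for i in range(start, end):      # bounded local search
        if keys[i] == key:
            return i
    return -1

keys = sorted([3, 8, 15, 16, 23, 42, 57, 91])
model = build_index(keys)
print(lookup(keys, model, 42))
```

The model’s worst-case error is recorded at build time, so lookups are always correct; the bet is that one prediction plus a short scan beats walking a tree, especially when the prediction is a batch of parallel multiplies.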
Google had an interesting project. Machine learning experts are in very short supply, and many different machine learning strategies have been published. Google has a project where they try several different strategies at once, see which ones are working best, try variations of those, and iterate in a loop. The approach consumes absolutely huge amounts of compute resources, but you get results comparable to what you would get from those scarce machine learning experts.
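In outline, the loop is easy to sketch. Everything below (the two hyperparameters and the toy “accuracy” function) is invented for illustration; in the real system each evaluation means training a model, which is where the huge compute bill comes from:

```python
import random

# Toy version of the search loop: evaluate a population of
# configurations, keep the best half, mutate the winners, repeat.

def accuracy(config):
    # Stand-in for "train a model with this config and score it";
    # the (made-up) ideal here is a learning rate of 0.01, 4 layers.
    lr, layers = config
    return 1.0 - abs(lr - 0.01) * 10 - abs(layers - 4) * 0.05

def search(rounds=5, population=8, seed=0):
    rng = random.Random(seed)
    configs = [(rng.uniform(0.001, 0.1), rng.randint(1, 8))
               for _ in range(population)]
    for _ in range(rounds):
        # Keep the best half, then perturb each winner slightly.
        top = sorted(configs, key=accuracy, reverse=True)[:population // 2]
        configs = top + [(lr * rng.uniform(0.5, 2.0),
                          max(1, layers + rng.randint(-1, 1)))
                         for lr, layers in top]
    return max(configs, key=accuracy)

print(search())
```

Because the best configurations are carried forward unchanged each round, the best score found can only improve as the loop runs.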
One German speaker was talking about the GDPR. His whole view was very European, about the rules the European Union has adopted relating to privacy and the responsibility of programmers and companies to uphold liberal democratic values. He talked about the 2016 US election as if no one would have voted for Trump except that Cambridge Analytica performed some sort of mass hypnosis on half the US population (I voted against Trump, but I have relatives who voted for him, I understand why they did, and Cambridge Analytica had NOTHING to do with it). I asked if implementing the GDPR would be constitutional in the US, because a “right to be forgotten” would conflict with freedom of speech. He said Europe has freedom of speech (which I find preposterous) and gave arguments that the right to be forgotten was a good thing (which is plausible) without explaining how forcing websites to remove accurate statements did not in fact conflict with free speech. I didn’t argue with him any further because it would have meant hogging the floor.
The most exciting presentation was by http://ctrl-labs.com, a company working on an enhanced brain-machine interface. Reading electrical signals directly from the brain is very difficult. Even if you put electrodes on the scalp, the signals in the brain are still the thickness of the skull away, and very weak, while the signals from the nerves in the scalp are much stronger and closer, so the signal-to-noise ratio is just horrendous. To really get anywhere, you have to drill through the skull, which is not a terribly popular idea.
This company instead has a bracelet you wear on your wrist that detects signals in the nerves leading to your hand, and can read what you’re doing with your hand. The signals from the bracelet go to a neural net that figures out which nerve is which, and from there they can determine with a very high degree of accuracy what you’re telling your hand to do. They had a demo where the founder was wearing the bracelet, with a monitor showing a picture of a hand, and as he moved his hand around, the cartoon hand on the monitor mimicked his actions. Then a friend grabbed his hand and held it in a fist, and the monitor continued to display what he was telling his hand to do, even though his hand was forced into a fist.
Then they had a very cool demo of the video game “Asteroids” being played on an iPhone. The game only has a few degrees of freedom, and the player had first learned to play it by moving one hand; with training, he was able to send subtle signals well enough to play the game while his hand remained completely motionless.
I asked how much CPU power was involved, and whether there were in fact 1000 GPU-assisted CPUs in the cloud facilitating these demos. They said no. I asked if there was enough CPU power in a smartphone to do the demo. One of them said yes; the other said there was enough CPU power in the average digital watch. He said the bracelet only reads 16 values, so it isn’t that much data.
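That claim is plausible with a little arithmetic. Assuming a small fully-connected net (the layer sizes below are my guess; the company didn’t describe its model), one inference over 16 sensor values is only a few thousand multiply-adds:

```python
# Multiply-adds for a dense network, layer by layer. With 16 inputs,
# even generous hidden layers keep the cost tiny by modern standards.

def mults_per_inference(layer_widths):
    return sum(a * b for a, b in zip(layer_widths, layer_widths[1:]))

# 16 sensor values -> two hidden layers -> a handful of outputs.
print(mults_per_inference([16, 64, 64, 8]))  # 5632
```

Even at a thousand readings per second, that is only a few million multiply-adds per second, which supports the digital-watch claim.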
They claimed they could read the equivalent of a six-fingered hand, and a user could learn to manipulate a robot arm with more degrees of freedom than a human arm with this bracelet.
Another application was amputees. If you lose a hand in an accident, surgeons are pretty good at transplanting a replacement, but a problem is that while you’re waiting for a hand to transplant, the nerves that control the hand go unused and atrophy, making it difficult to learn to use the new hand once you have it. If you wear the bracelet and do things with it while you’re waiting, the nerves won’t atrophy as much.
They’re going to start shipping bracelets in 2018; you can register on the website linked above to be put on their waitlist.