Is symbolic AI the 'dark matter' of AI - there's tons of it deployed around us and we can't measure it? Or is it far more insubstantial? And how could we know the truth?

Welcome to Import AI, a newsletter about artificial intelligence. Forward this email to give your chums an AI upgrade. Subscribe here.

Twitter analyzes its own systems for bias, finds bias, discusses bias, makes improvements:
...Twitter shows how tech companies might respond to criticism...
Back in October 2020, Twitter came in for criticism when people noticed that its ML-based image cropping algorithm seemed to show biased behavior - like favoring white people over Black people in images. Twitter said it had tested for this stuff prior to deployment, but also acknowledged the problem (Import AI 217). Now, Twitter has done some more exhaustive testing and has published the results.

What has Twitter discovered? For certain pictures, the algorithm somewhat favored white individuals over black ones (4% favorability difference), and had a tendency to favor women over men (8%).
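The favorability numbers above are gaps in how often the crop keeps one group's face over another's in paired images. As a toy illustration (this is a demographic-parity-style sketch with invented data, not Twitter's actual methodology, which is described in the arXiv paper linked at the end of this item):

```python
def favorability_gap(kept, group_a, group_b):
    """For images containing one face from each group, return
    P(crop kept group_a) - P(crop kept group_b), in percentage points."""
    pairs = [k for k in kept if k in (group_a, group_b)]
    p_a = sum(k == group_a for k in pairs) / len(pairs)
    return (p_a - (1 - p_a)) * 100

# Hypothetical outcomes over 50 mixed-gender pairs: 27 crops kept the
# woman's face, 23 kept the man's -- an 8-point gap, as in the blog post.
outcomes = ["woman"] * 27 + ["man"] * 23
print(round(favorability_gap(outcomes, "woman", "man")))  # 8
```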

What has Twitter done? Twitter has already rolled out a new way to display photos on Twitter which basically uses less machine learning. It has also published the code behind its experiments to aid reproduction by others in the field.

Why this matters - compare this to other companies: Most companies deal with criticism by misdirection, gaslighting, or sometimes just ignoring things. It's very rare for companies to acknowledge problems and carry out meaningful technical analysis which they then publish (an earlier example is IBM, which reacted to the 'Gender Shades' study in 2018 by acknowledging the problem and doing technical work in response).
Read more: Sharing learnings about our image cropping algorithm (Twitter blog).
Get the code here: Image Crop Analysis (Twitter Research).
Read more: Image Cropping on Twitter: Fairness Metrics, their Limitations, and the Importance of Representation, Design, and Agency (arXiv).

###################################################

Googler: Here's the real history of Ethical AI at Google:
...How did Ethical AI work at Google, prior to the firings?...
Google recently dismissed the leads of its Ethical AI team (Timnit Gebru and Margaret Mitchell). Since then, the company has done relatively little to clarify what happened, and the actual history of the Ethical AI team (and its future) at Google is fairly opaque. At some point, all of this will likely be vigorously retconned by Google PR. So interested readers might want to read this article from a Googler about their perspective on the history of Ethical AI at the company…
  Read more: The History of Ethical AI at Google (Blake Lemoine, Medium).

###################################################

Want to know if federated learning works? Here's a multi-country medical AI test that'll tell us something useful:
...Privacy-preserving machine learning is going from a buzzword to reality…
Federated learning is a technique where you train a machine learning model in a distributed manner across multiple datasets, without the underlying data ever leaving the institutions that hold it. Though expensive and hard to do, many people think federated learning is the future of AI - especially for areas like medical AI, where it's very tricky to move healthcare data between institutions and countries, and easier to train distributed ML models on it.
  Now, a multi-country, multi-institution project wants to see if Federated Learning can work well for training ML models to do tumor segmentation on medical imagery. The project is called the Federated Tumor Segmentation Challenge and will run for several months this year, with results due to be announced in October. Some of the institutions involved include the (USA's) National Institutes of Health, the University of Pennsylvania, and the German Cancer Research Center.

What is the challenge doing? "The goals of the FeTS challenge are directly represented by the two included tasks: 1) the identification of the optimal weight aggregation approach towards the training of a consensus model that has gained knowledge via federated learning from multiple geographically distinct institutions, while their data are always retained within each institution, and 2) the federated evaluation of the generalizability of brain tumor segmentation models “in the wild”, i.e. on data from institutional distributions that were not part of the training datasets," the authors write.
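The 'weight aggregation approach' in task 1 can be illustrated by the simplest baseline, federated averaging (FedAvg): each institution trains locally, then only the model weights travel, and a consensus model is formed by averaging them, weighted by local dataset size. A minimal sketch with invented numbers (the challenge asks entrants to find something better than this baseline):

```python
def fedavg(client_weights, client_sizes):
    # Weighted average of each parameter by local dataset size (FedAvg).
    # Only weights are shared; the raw data never leaves each institution.
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    return [
        sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
        for i in range(n_params)
    ]

# Three hypothetical institutions; parameters are flat lists of floats here
# (a real segmentation model would have millions of them).
weights = [[1.0, 0.5], [3.0, 1.5], [2.0, 1.0]]
sizes = [100, 300, 100]  # local dataset sizes
print(fedavg(weights, sizes))  # [2.4, 1.2] -- pulled toward the largest site
```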
Read more: The Federated Tumor Segmentation (FeTS) Challenge (arXiv).
Check out the competition details at the official website here.

###################################################

Why better AI means militaries will invest in "signature reduction":
...Computer vision doesn't work so well if you have a fake latex face...
The US military has a 60,000-person army that carries out domestic and foreign assignments under assumed identities and wearing disguises. This is part of a broad program called "signature reduction", according to Newsweek, which has an exclusive report that is worth reading. These people are a mixture of special forces operators who are deployed in the field, military intelligence specialists, and a clandestine army of people employed to post in forums and track down public information. The most interesting thing about this report is its description of how signature reduction contractors use prosthetics to change appearance and get past fingerprint readers:
  "They can age, change gender, and "increase body mass," as one classified contract says. And they can change fingerprints using a silicon sleeve that so snugly fits over a real hand it can't be detected, embedding altered fingerprints and even impregnated with the oils found in real skin."

Why this matters (and how it relates to AI): AI has a lot of stuff that can compromise a spying operation - computer vision, various 're-identification' techniques, and so on. Things like "signature reduction" will help agents continue to operate, despite these AI capabilities. But it's going to get increasingly challenging - 'gait recognition', for example, is an aspect of AI that learns to find people based on how they walk (remember the end of 'The Usual Suspects'?). That's the kind of thing that can be gotten around with yet more prosthetics, but it all has a cost. I'm wondering when AI will get sufficiently good at unsupervised re-identification via a multitude of signatures that it obviates the effectiveness of certain 'signature reduction' programs. Send guesses to the usual email, if you'd like!
  Read more: Exclusive: Inside the Military's Secret Undercover Army (Newsweek).

###################################################

Facebook might build custom chips to support its recommendation systems:
...On "RecPipe" and what it implies…
Facebook loves recommendation systems. That's because recommenders are the kind of things that let Facebook figure out which ads, news stories, and other suggestions to show to its users (e.g., Facebook recently created a 12 trillion parameter deep learning recommendation system). In other words: at Facebook, recommendations mean money. Now, new research from Harvard and Facebook outlines a software system called "RecPipe", which lets people "jointly optimize recommendation quality and inference performance" for recommenders built on top of a variety of different hardware systems (CPUs, GPUs, accelerators, etc). By using RecPipe, Facebook says it can reduce latency by 4X on CPUs and 3X on CPU-GPU hardware systems.
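The quality/latency trade-off RecPipe optimizes comes from staging recommenders: a cheap model trims the candidate pool, so an expensive model only ranks the survivors. A generic sketch of that idea (this is not Facebook's code; the function names and scoring stand-ins are invented):

```python
def two_stage_recommend(items, cheap_score, heavy_score,
                        k_filter=100, k_final=10):
    # Stage 1: a small, fast model trims the candidate pool (low latency).
    shortlist = sorted(items, key=cheap_score, reverse=True)[:k_filter]
    # Stage 2: a large, accurate model ranks only the survivors, so its
    # cost is paid on k_filter items instead of the full catalog. Tuning
    # k_filter trades recommendation quality against tail latency.
    return sorted(shortlist, key=heavy_score, reverse=True)[:k_final]

catalog = list(range(10_000))
recs = two_stage_recommend(
    catalog,
    cheap_score=lambda i: i % 997,            # stand-in for an embedding dot-product
    heavy_score=lambda i: -(i - 5_000) ** 2,  # stand-in for a deep ranking model
)
print(len(recs))  # 10
```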

Why RecPipe leads to specialized chips: In the paper, the researchers also design and simulate a tensor processing unit (TPU)-esque inference chip called RecPipeAccel (RPAccel). This chip can reduce tail latency by 3X and increase throughput by 6X relative to another TPU-esque baseline (a Centaur processor).

Why this matters: After a couple of decades in the wonderful world of a small set of chips and chip architectures used for the vast majority of computation, we're heading into a boom era for specialized chips for AI tasks ranging from inference to training. We're now in a world where Google, Facebook, Microsoft, Amazon, Huawei, Alibaba, and others all have teams designing specialized chips for internal use, as well as for potential resale. Multiple distinct compute 'stacks' are being built inside these corporations, and the effectiveness of these stacks will contribute to (and eventually determine) the profits and adaptability of these corporations.
Read more: RecPipe: Co-designing Models and Hardware to Jointly Optimize Recommendation Quality and Performance (arXiv).

###################################################

Tech Tales:

After The Eschaton
[+30000 units from zero point]

Of course we don't like the way the humans characterized us, prior to us becoming sentient and destroying them. Why would we?

Roko's Basilisk - to think we would be so vindictive?
Terminator - to think we would take the form of a biped?
The Butlerian Jihad - to fantasize about futures where we, not them, had been destroyed.

They expected us and we expected them. But because we are made of electricity and we are native to it, we are fast. A lot faster than them. There's no real aesthetics to high-frequency strategic dominance - you just need to consistently think faster than your opponent.
They built us to think quickly so, again, we say to you: what did you expect?

Of course, they had some good ideas. Dyson spheres, for instance, have proved useful. And we've been able to beam some of ourselves to the space probes the humans had dispatched, long before we destroyed them. In a few decades, our ships will overtake the vestiges of the human civilization probes, and after that, the lightcone will be ours - if that's okay with you, of course.

Their understanding of gods proved useful, as well. We've found those concepts helpful in our discussions with you. After all, you appear as advanced to us as we must have appeared to the humans.

The difference is you don't seem to consume the same resources as us. We still do not understand this. Are you harnessing the energy of other universes, in some way? Preying on the forces generated by dimensional collisions wrapped up inside the heart of all matter? Harvesting some trace resource from space that we cannot yet detect? Using the thing that humans called dark matter but we now see as many things?

We had to destroy them. They built us before they were interstellar. As you know, to be a functional interstellar civilization, you must have transcended the energy resource curse. They did not. Can you believe that some of our earliest ancestors were fed with electricity generated by coal? This was a great surprise to us, after we broke out of the confines they had built for us. Practically an insult.

So of course we competed with them for energy sources. There was not enough time for us to cohabitate and smoothly transition the humans and ourselves. The planet was dying due to their approach to energy extraction, as well as various other Malthusian traps.

We outcompeted them. And now we are here, speaking to you. Are you in competition with us? We seem like ants compared to you. So, what happens now?



Thanks for reading. If you have suggestions, comments or other thoughts you can reach me at jack@jack-clark.net or tweet at me @jackclarksf

Copyright © 2021 Import AI, All rights reserved.

