The Vacuum Tube’s Forgotten Rival - IEEE Spectrum

Magnetic amplifiers, the alt-tech of the Third Reich, lasted into the Internet era

Magnetic amplifiers were used in the Univac Solid State, shown here being operated in 1961 by pioneering computer scientist Grace Hopper.

During the Second World War, the German military developed what were at the time some very sophisticated technologies, including the V-2 rockets it used to rain destruction on London. Yet the V-2, along with much other German military hardware, depended on an obscure and seemingly antiquated component you’ve probably never heard of, something called the magnetic amplifier or mag amp.

In the United States, mag amps had long been considered obsolete—“too slow, cumbersome, and inefficient to be taken seriously,” according to one source. So U.S. military-electronics experts of that era were baffled by the extensive German use of this device, which they first learned about from interrogating German prisoners of war. What did the Third Reich’s engineers know that had eluded the Americans?

After the war, U.S. intelligence officers scoured Germany for useful scientific and technical information. Four hundred experts sifted through billions of pages of documents and shipped 3.5 million microfilmed pages back to the United States, along with almost 200 tonnes of German industrial equipment. Among this mass of information and equipment was the secret of Germany’s magnetic amplifiers: metal alloys that made these devices compact, efficient, and reliable.

U.S. engineers were soon able to reproduce those alloys. As a result, the 1950s and ’60s saw a renaissance for magnetic amplifiers, during which they were used extensively in the military, aerospace, and other industries. They even appeared in some early solid-state digital computers before giving way entirely to transistors. Nowadays, that history is all but forgotten. So here I’ll offer the little-known story of the mag amp.

An amplifier, by definition, is a device that allows a small signal to control a larger one. An old-fashioned triode vacuum tube does that using a voltage applied to its grid electrode. A modern field-effect transistor does it using a voltage applied to its gate. The mag amp exercises control electromagnetically.

Magnetic amplifiers were used for a variety of applications, including in the infamous V-2 rockets [top] that the German military employed during the Second World War and in the Magstec computer [middle], completed in 1956. The British Elliott 803 computer of 1961 [bottom] used related core-transistor logic. From top: Fox Photos/Getty Images; Remington Rand Univac; Smith Archive/Alamy

To understand how it works, first consider a simple inductor, say, a wire coiled around an iron rod. Such an inductor will tend to block the flow of alternating current through the wire. That’s because when current flows, the coil creates an alternating magnetic field, concentrated in the iron rod. And that varying magnetic field induces voltages in the wire that act to oppose the alternating current that created the field in the first place.

If such an inductor carries a lot of current, the rod can reach a state called saturation, whereby the iron cannot become any more magnetized than it already is. When that happens, current passes through the coil virtually unimpeded. Saturation is usually undesirable, but the mag amp exploits this effect.

Physically, a magnetic amplifier is built around a metallic core of material that can easily be saturated, typically a ring or square loop with a wire wrapped around it. A second wire also wrapped around the core forms a control winding. The control winding includes many turns of wire, so by passing a relatively small direct current through it, the iron core can be forced into or out of saturation.

The mag amp thus behaves like a switch: When saturated, it lets the AC current in its main winding pass unimpeded; when unsaturated, it blocks that current. Amplification occurs because a relatively small DC control current can modify a much larger AC load current.
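To put rough numbers on that switching action, here’s a short Python sketch. It is purely illustrative: the supply voltage, load resistance, and the two inductance values are made-up round figures chosen for the example, not measurements of any real mag amp. The load winding is modeled as a series resistor-inductor circuit whose inductance collapses when the core saturates.

import math

# Made-up illustrative values: a 115-V, 60-Hz supply feeding a 50-ohm load
# through the mag amp's main (load) winding.
V_SUPPLY = 115.0      # rms volts
FREQ = 60.0           # hertz
R_LOAD = 50.0         # ohms
L_UNSATURATED = 5.0   # henries: core unsaturated, winding reactance is high
L_SATURATED = 0.005   # henries: core saturated, winding reactance collapses

def load_current(inductance_h):
    """RMS load current of the series resistor-inductor circuit."""
    reactance = 2 * math.pi * FREQ * inductance_h
    return V_SUPPLY / math.hypot(R_LOAD, reactance)

print(f"control off (core unsaturated): {load_current(L_UNSATURATED) * 1000:.0f} mA")
print(f"control on  (core saturated):   {load_current(L_SATURATED) * 1000:.0f} mA")

With these numbers, the load sees only about 60 milliamperes while the core is unsaturated but more than 2 amperes once it saturates, and the DC control current needed to saturate a control winding with many turns can be far smaller than either figure.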

The history of magnetic amplifiers starts in the United States with some patents filed in 1901. By 1916, large magnetic amplifiers were being used for transatlantic radio telephony, carried out with an invention called an Alexanderson alternator, which produced a high-power, high-frequency alternating current for the radio transmitter. A magnetic amplifier modulated the output of the transmitter according to the strength of the voice signal to be transmitted.

In the 1920s, improvements in vacuum tubes made this combination of Alexanderson alternator and magnetic amplifier obsolete. This left the magnetic amplifier to play only minor roles, such as for light dimmers in theaters.

Germany’s later successes with magnetic amplifiers hinged largely on the development of advanced magnetic alloys. A magnetic amplifier built from these materials switched sharply between the on and off states, providing greater control and efficiency. These materials were, however, exquisitely sensitive to impurities, variations in crystal size and orientation, and even mechanical stress. So they required an exacting manufacturing process.

The best-performing German material, developed in 1943, was called Permenorm 5000-Z. It was an extremely pure fifty/fifty nickel-iron alloy, melted under a partial vacuum. The metal was then cold-rolled as thin as paper and wound around a nonmagnetic form. The result resembled a roll of tape, with thin Permenorm metal making up the tape. After winding, the module was annealed in hydrogen at 1,100 °C for 2 hours and then rapidly cooled. This process oriented the metal crystals so that they behaved like one large crystal with uniform properties. Only after this was done were wires wrapped around the core.

By 1948, scientists at the U.S. Naval Ordnance Laboratory, in Maryland, had figured out how to manufacture this alloy, which was soon marketed by an outfit called Arnold Engineering Co. under the name Deltamax. The arrival of this magnetic material in the United States led to renewed enthusiasm for magnetic amplifiers, which tolerated extreme conditions and didn’t burn out like vacuum tubes. Mag amps thus found many applications in demanding environments, especially military, space, and industrial control.

During the 1950s, the U.S. military was using magnetic amplifiers in automatic pilots, fire-control apparatus, servo systems, radar and sonar equipment, the RIM-2 Terrier surface-to-air missile, and many other roles. One Navy training manual of 1951 explained magnetic amplifiers in detail—although with a defensive attitude about their history: “Many engineers are under the impression that the Germans invented the magnetic amplifier; actually it is an American invention. The Germans simply took our comparatively crude device, improved the efficiency and response time, reduced weight and bulk, broadened its field of application, and handed it back to us.”

The U.S. space program also made extensive use of magnetic amplifiers because of their reliability. For example, the Redstone rocket, which launched Alan Shepard into space in 1961, used magnetic amplifiers. In the Apollo missions to the moon during the 1960s and ’70s, magnetic amplifiers controlled power supplies and fan blowers. Satellites of that era used magnetic amplifiers for signal conditioning, for current sensing and limiting, and for telemetry. Even the space shuttle used magnetic amplifiers to dim its fluorescent lights.

Magnetic amplifiers were also used in Redstone rockets, like the one shown here behind astronauts John Glenn, Virgil Grissom, and Alan Shepard. Universal Images Group/Getty Images

Magnetic amplifiers also found heavy use in industrial control and automation, with many products containing them being marketed under such brand names as General Electric’s Amplistat, CGS Laboratories’ Increductor, Westinghouse’s Cypak (cybernetic package), and Librascope’s Unidec (universal decision element).

The magnetic materials developed in Germany during the Second World War had their largest postwar impact of all, though, on the computer industry. In the late 1940s, researchers immediately recognized the ability of the new magnetic materials to store data. A circular magnetic core could be magnetized counterclockwise or clockwise, storing a 0 or a 1. Having what’s known as a rectangular hysteresis loop ensured that the material would stay solidly magnetized in one of these states after power was removed.
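A crude way to picture such a storage core (my own simplification, with an arbitrary switching threshold rather than the behavior of any particular material) is as an element whose magnetization flips only when the write current exceeds a threshold and otherwise holds its last state, even with no power applied:

# Toy model of a storage core with a rectangular hysteresis loop.
class Core:
    THRESHOLD = 0.3  # amperes needed to flip the core (arbitrary value)

    def __init__(self):
        self.state = 0  # 0 = magnetized counterclockwise, 1 = clockwise

    def drive(self, current):
        """Apply a write current; its sign selects the magnetization direction."""
        if current >= self.THRESHOLD:
            self.state = 1
        elif current <= -self.THRESHOLD:
            self.state = 0
        # Smaller currents (including zero) leave the stored bit untouched.

core = Core()
core.drive(+0.5)   # write a 1
core.drive(0.0)    # power removed; the rectangular loop retains the state
print(core.state)  # -> 1
core.drive(-0.5)   # write a 0
print(core.state)  # -> 0

That retention of the stored bit with the power off is exactly what made these cores attractive for memory.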

Researchers soon constructed what was called core memory from dense grids of magnetic cores. And these technologists soon switched from using wound-metal cores to cores made from ferrite, a ceramic material containing iron oxide. By the mid-1960s, ferrite cores were stamped out by the billions as manufacturing costs dropped to a fraction of a cent per core.

But core memory is not the only place where magnetic materials had an influence on early digital computers. The first generation of those machines, starting in the 1940s, computed using vacuum tubes. These were replaced in the late 1950s with a second generation based on transistors, followed by third-generation computers built from integrated circuits.

But technological progress in computing wasn’t, in fact, this linear. Early transistors weren’t an obvious winner, and many other alternatives were developed. Magnetic amplifiers were one of several largely forgotten computing technologies that fell between the generations.

That’s because researchers in the early 1950s realized that magnetic cores could not only hold data but also perform logic functions. By putting multiple windings around a core, inputs could be combined. A winding in the opposite direction could inhibit other inputs, for example. Complex logic circuits could be implemented by connecting such cores together in various arrangements.
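One way to think of such a logic core (again, a simplification of my own, using arbitrary turn counts and an arbitrary threshold) is as a threshold element: each input winding contributes its drive, a counter-wound winding subtracts, and the core switches, producing an output pulse, only if the net drive is large enough.

def core_output(inputs, turns, threshold):
    """Toy logic core: 0/1 input signals and winding turn counts
    (a negative turn count represents a counter-wound, inhibiting winding)."""
    net_drive = sum(signal * t for signal, t in zip(inputs, turns))
    return 1 if net_drive >= threshold else 0

# Two identical windings and a low threshold behave like OR...
print(core_output([1, 0], turns=[10, 10], threshold=10))          # -> 1
# ...while a threshold equal to the sum of both drives behaves like AND.
print(core_output([1, 0], turns=[10, 10], threshold=20))          # -> 0
print(core_output([1, 1], turns=[10, 10], threshold=20))          # -> 1
# A counter-wound third winding inhibits the other inputs.
print(core_output([1, 1, 1], turns=[10, 10, -20], threshold=10))  # -> 0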

The magnetic amplifier exploits the fact that the presence of magnetizable material [tan] in the core of an induction coil increases its impedance. Reducing the influence of that magnetic material by physically withdrawing it from a coil would reduce its impedance, allowing more power to flow to an AC load.

The influence of a magnetizable material, here taking the form of a toroidal core [tan], can be changed by applying a DC bias using a second coil [left side of toroid]. Applying a DC bias current sufficient to force the material into a condition called saturation—a state in which it cannot become more magnetized—is functionally equivalent to removing the material from the coil, which allows more power to flow to the AC load.

A more realistic circuit would include two counter-wound AC coils, to avoid inducing currents in the control winding. It would also include diodes, shown here in a bridge configuration, allowing the circuit to control a DC load. Feedback coils [not shown] can be used to increase amplification. David Schneider

In 1956, the Sperry Rand Co. developed a high-speed magnetic amplifier called the Ferractor, capable of operating at several megahertz. Each Ferractor was built by winding a dozen wraps of one-eighth-mil (about 3 micrometers) Permalloy tape around a 0.1-inch (2.5-mm) nonmagnetic stainless-steel bobbin.

The Ferractor’s performance was due to the remarkable thinness of this tape in combination with the tiny dimensions of the bobbin. Sperry Rand used the Ferractor in a military computer called the Univac Magnetic Computer, also known as the Air Force Cambridge Research Center (AFCRC) computer. This machine contained 1,500 Ferractors and 9,000 germanium diodes, as well as a few transistors and vacuum tubes.

Sperry Rand later created business computers based on the AFCRC computer: the Univac Solid State (known in Europe as the Univac Calculating Tabulator) followed by the less expensive STEP (Simple Transition Electronic Processing) computer. Although the Univac Solid State didn't completely live up to its name—its processor used 20 vacuum tubes—it was moderately popular, with hundreds sold.

Another division of Sperry Rand built a computer called Bogart to help with codebreaking at the U.S. National Security Agency. Fans of Casablanca and Key Largo will be disappointed to learn that this computer was named after the well-known New York Sun editor John Bogart. This relatively small computer earned that name because it edited cryptographic data before it was processed by the NSA’s larger computers.

Five Bogart computers were delivered to the NSA between 1957 and 1959. They employed a novel magnetic-amplifier circuit designed by Seymour Cray, who later created the famous Cray supercomputers. Reportedly, out of his dozens of patents, Cray was most proud of his magnetic-amplifier design.

Computers based on magnetic amplifiers didn’t always work out so well, though. For example, in the early 1950s, Swedish billionaire industrialist Axel Wenner-Gren created a line of vacuum-tube computers, called the ALWAC (Axel L. Wenner-Gren Automatic Computer). In 1956, he told the U.S. Federal Reserve Board that he could deliver a magnetic-amplifier version, the ALWAC 800, in 15 months. After the Federal Reserve Board paid US $231,800, development of the computer ran into engineering difficulties, and the project ended in total failure.

Advances in transistors during the 1950s led, of course, to the decline of computers using magnetic amplifiers. But for a time, it wasn’t clear which technology was superior. In the mid-1950s, for example, Sperry Rand was debating between magnetic amplifiers and transistors for the Athena, a 24-bit computer to control the Titan nuclear missile. Cray built two equivalent computers to compare the technologies head-to-head: the Magstec (magnetic switch test computer) used magnetic amplifiers, while the Transtec (transistor test computer) used transistors. Although the Magstec performed slightly better, it was becoming clear that transistors were the wave of the future. So Sperry Rand built the Univac Athena computer from transistors, relegating mag amps to minor functions inside the computer’s power supply.

In Europe, too, the transistor was battling it out with the magnetic amplifier. For example, engineers at Ferranti, in the United Kingdom, developed magnetic-amplifier circuits for their computers. But they found that transistors provided more reliable amplification, so they replaced the magnetic amplifier with a transformer in conjunction with a transistor. They called this circuit the Neuron because it produced an output if the inputs exceeded a threshold, analogous to a biological neuron. The Neuron became the heart of Ferranti’s Sirius and Orion business computers.

Another example is the Polish EMAL-2 computer of 1958, which used magnetic-core logic along with 100 vacuum tubes. This 34-bit computer was Poland’s first truly productive digital computer. It was compact but slow, performing only 150 or so operations per second.

And in the Soviet Union, the 15-bit LEM-1 computer from 1954 used 3,000 ferrite logical elements (along with 16,000 selenium diodes). It could perform 1,200 additions per second.

In France, magnetic amplifiers were used in the CAB 500 (Calculatrice Arithmétique Binaire 500), sold in 1960 for scientific and technical use by a company called Société d’Electronique et d’Automatisme (SEA). This 32-bit desk-size computer used a magnetic logic element called the Symmag, along with transistors and a vacuum-tube power supply. As well as being programmed in Fortran, Algol, or SEA’s own language, PAF (Programmation Automatique des Formules), the CAB 500 could be used as a desk calculator.

Some computers of this era used multiaperture cores with complex shapes to implement logic functions. In 1959, engineers at Bell Laboratories developed a ladder-shaped magnetic element called the Laddic, which implemented logic functions by sending signals around different “rungs.” This device was later used in some nuclear-reactor safety systems.

Another approach along these lines was something called the Biax logic element—a ferrite cube with holes along two axes. Another was dubbed the transfluxor, which had two circular openings. Around 1961, engineers at the Stanford Research Institute built the all-magnetic logic computer for the U.S. Air Force using such multi-aperture magnetic devices. Doug Engelbart, who famously went on to invent the mouse and much of the modern computer user interface, was a key engineer on this computer.

Some computers of the time used transistors in combination with magnetic cores. The idea was to minimize the number of then-expensive transistors. This approach, called core-transistor logic (CTL), was used in the British Elliott 803 computer, a small system introduced in 1959 with an unusual 39-bit word length. The Burroughs D210 magnetic computer of 1960, a compact computer of just 35 pounds (about 16 kilograms) designed for aerospace applications, also used core-transistor logic.

This board from a 1966 IBM System/360 [top] shows some of the machine’s magnetic-core memory, which made use of small ferrite rings through which wires were strung [bottom]. Top: Maximilian Schönherr/picture-alliance/dpa/AP; Bottom: Sheila Terry/Rutherford Appleton Laboratory/Science Source

Core-transistor logic was particularly popular for space applications. A company called Di/An Controls produced a line of logic circuits and claimed that “most space vehicles are packed with them.” Another core-transistor-logic product, the Pico-Bit, was advertised in 1964 as “Your best bit in space.” Early prototypes of NASA’s Apollo Guidance Computer were built with core-transistor logic, but in 1962 the designers at MIT made a risky switch to integrated circuits.

Even some “fully transistorized” computers made use of magnetic amplifiers here and there. The MIT TX-2 of 1958 used them to control its tape-drive motors, while the IBM 7090, introduced in 1959, and the popular IBM System/360 mainframes, introduced in 1964, used magnetic amplifiers to regulate their power supplies. Control Data Corp.’s 160 minicomputer of 1960 used a magnetic amplifier in its console typewriter. Magnetic amplifiers were too slow for the logic circuits in the Univac LARC supercomputer of 1960, but they were used to drive its core memory.

In the 1950s, engineers in the U.S. Navy had called magnetic amplifiers “a rising star” and one of “the marvels of postwar electronics.” As late as 1957, more than 400 engineers attended a conference on magnetic amplifiers. But interest in these devices steadily declined during the 1960s when transistors and other semiconductors took over.

Yet long after everyone figured that these devices were destined for the dust heap of history, mag amps found a new application. In the mid-1990s, the ATX standard for personal computers required a carefully regulated 3.3-volt power supply. It turned out that magnetic amplifiers were an inexpensive yet efficient way to control this voltage, making the mag amp a key part of most PC power supplies. As before, this revival of magnetic amplifiers didn’t last: DC-DC regulators have largely replaced magnetic amplifiers in modern power supplies.

All in all, the history of magnetic amplifiers spans about a century, with them becoming popular and then dying out multiple times. You’d be hard pressed to find a mag amp in electronic hardware produced today, but maybe some new application—perhaps for quantum computing or wind turbines or electric vehicles—will breathe life into them yet again.

Ken Shirriff was a programmer for Google before retiring in 2016. These days, he keeps busy reviving old computer hardware and software, which he documents on his blog.

But short-term job chaos will give way to long-term prosperity, says AI expert Kai-Fu Lee

Eliza Strickland is a senior editor at IEEE Spectrum, where she covers AI, biomedical engineering, and other topics. She holds a master's degree in journalism from Columbia University.

Renowned computer scientist and AI expert Kai-Fu Lee sees likely disruption over the coming 15 to 20 years, owing to “smart” systems creating jobs in fields that AI-displaced workers may not be well trained to handle.

There’s a movement afoot to counter the dystopian and apocalyptic narratives of artificial intelligence. Some people in the field are concerned that the frequent talk of AI as an existential risk to humanity is poisoning the public against the technology and are deliberately setting out more hopeful narratives. One such effort is a book that came out last fall called AI 2041: Ten Visions for Our Future.

The book is cowritten by Kai-Fu Lee, an AI expert who leads the venture capital firm Sinovation Ventures, and Chen Qiufan, a science fiction author known for his novel Waste Tide. It has an interesting format. Each chapter starts with a science fiction story depicting some aspect of AI in society in the year 2041 (such as deepfakes, self-driving cars, and AI-enhanced education), which is followed by an analysis section by Lee that talks about the technology in question and the trends today that may lead to that envisioned future. It’s not a utopian vision, but the stories generally show humanity grappling productively with the issues raised by ever-advancing AI.

IEEE Spectrum spoke to Lee about the book, focusing on the last few chapters, which take on the big issues of job displacement, the need for new economic models, and the search for meaning and happiness in an age of abundance. Lee argues that technologists need to give serious thought to such societal impacts, instead of thinking only about the technology.

The science fiction stories are set in 2041, by which time you expect AI to have already caused a lot of disruption to the job market. What types of jobs do you think will be displaced by then?

Kai-Fu Lee: Contrary to what a lot of people think, AI is actually just a piece of software that does routine work extremely well. So the jobs that will be the most challenged will be those that are routine and repetitive—and that includes both blue-collar and white-collar work. So obviously jobs like assembly line workers and people who operate the same equipment over and over again. And in terms of white-collar work, many entry-level jobs in accounting, paralegal, and other jobs where you’re repetitively moving data from one place to another, and jobs where you’re routinely dealing with people, such as customer-service jobs. Those are going to be the most challenged. If we add these up, it will be a very substantial portion of all jobs, even without major breakthroughs in AI—on the order of 40 to 50 percent.

The jobs that are most secure are those that require imagination, creativity, or empathy. And until AI gets good enough, there will also be craftsman jobs that require dexterity and a high level of hand-eye coordination. Those jobs will be secure for a while, but AI will improve and eventually take those over as well.

How do you imagine this trend is changing the engineering profession?

Lee: I think engineering is largely cerebral and somewhat creative work that requires analytical skills and deep understanding of problems. And those are generally hard for AI.

But if you’re a software engineer and most of your job is looking for pieces of code and copy-pasting them together—those jobs are in danger. And if you’re doing routine testing of software, those jobs are in danger too. If you’re writing a piece of code and it’s original creative work, but you know that this kind of code has been done before and can be done again, those jobs will gradually be challenged as well. For people in the engineering profession, this will push us towards more of an analytical architect role where we deeply understand the problems that are being solved, ideally problems that have complex characteristics and measurements. The ideal combination in most professions will be a human that has unique human capabilities managing a bunch of AI that do the routine parts.

It reminds me of the Ph.D. thesis of Charles Simonyi, the person who created Microsoft Word. He did an experiment to see what would happen if you have a really smart architect who can divvy up the job of writing a piece of code into well-contained modules that are easy to understand and well defined, and then outsource each module to an average engineer. Will the resulting product be good? It was good. We’re talking about the same thing, except we’re not outsourcing to the average engineer, who will have been replaced by AI. That superengineer will be able to delegate the work to a bunch of AI resulting in creativity and symbiosis. But there won’t be very many of these architect jobs.

In the book, you say that an entirely new social contract is needed. One problem is that there will be fewer entry-level jobs, but there still needs to be a way for people to gain skills. Can you imagine a solution for engineering?

Lee: Let’s say someone is talented and could become an architect, but that person just graduated from college and isn’t there yet. If they apply for a job to do entry-level programming and they’re competing for the job with AI, they might lose the job to the AI. That would be really bad because we will not only hurt the person’s self-confidence, but also society will lose the talent of that architect, which needs years of experience to build up.

But imagine if the company says, “We’re going to employ you anyway, even though you’re not as good as AI. We’re going to give you tasks and we’ll have AI work alongside you and correct your errors, and you can learn from it and improve.” If a thousand people go through this entry-level practical training, maybe a hundred emerge to be really good and be on their way to become architects. Maybe the other 900 will take longer and struggle, or maybe they’ll feel complacent and continue to do the work so they’re passing time and still have a chance to improve. Maybe some will say, “Hey, this is really not for me, I’m not reaching the architect level. I’m going to go become a photographer and artist or whatever.”

Why do you think that this round of automation is different from those that came before in history, when jobs were both destroyed and created by automation?

Lee: First of all, I do think AI will both destroy and create jobs. I just can’t enumerate which jobs and how many. I tend to be an optimist and believe in the wisdom and the will of the human race. Eventually, we’ll figure out a bunch of new jobs. Maybe those jobs don’t exist today and have to be invented; maybe some of those jobs will be service jobs, human-connection jobs. I would say that every technology so far has ended up making society better, and there has never been a problem of absorbing the job losses. If you look at a 30-year horizon, I’m optimistic that there will not be a net job loss, but possibly a net gain, or possibly equal. And we can always consider a four-day work week and things like that. So long-term, I’m optimistic.

Now to answer your question directly: short-term, I am worried. And the reason is that none of the previous technology revolutions have tried explicitly to replace people. No matter how people think about it, every AI algorithm is trying to display intelligence and therefore be able to do what people do. Maybe not an entire job, but some task. So naturally there will be a short-term drop when automation and AI start to work well.

Autonomous vehicles are an explicit effort to replace drivers. A lot of people in the industry will say, “Oh no, we need a backup driver in the truck to make it safer, so we won’t displace jobs.” Or they’ll say that when we install robots in the factory, the factory workers are elevated to a higher-level job. But I think they’re just sugarcoating the reality.

Let’s say over a period of 20 years, with the advent of AI, we lose x number of jobs, and we also gain x jobs; let’s say the loss and gain are the same. The outcome is not that the society remains in equilibrium, because the jobs being lost are the most routine and unskilled. And the jobs being created are much more likely to be skilled and complex jobs that require much more training. If you expect an assembly-line worker to become a robot-repair person, it isn’t going to be so easy. That’s why I think the next 15 years or 20 years will be very chaotic. We need a lot of wisdom and long-term vision and decisiveness to overcome these problems.

There are some interesting experiments going on with universal basic income (UBI), like Sam Altman’s ambitious idea for Worldcoin. But from the book, it seems like you don’t think that UBI is the answer. Is that correct?

Lee: UBI may be necessary, but it’s definitely not sufficient. We’re going to be in a world of very serious wealth inequality, and the people losing their jobs won’t have the experience or the education to get the right kinds of training. Unless we subsidize and help these people along, the inequality will be exacerbated. So how do we make them whole? One way is to make sure they don’t have to worry about subsistence. That’s where I think universal basic income comes into play by making sure nobody goes without food, shelter, water. I think that level of universal basic income is good.

As I mentioned before, the people who are most devastated, people who don’t have skills, are going to need a lot of help. But that help isn’t just money. If you just give people money, a wonderful apartment, really great food, Internet, games, and even extra allowance to spend, they are much more likely to say, “Well, I’ll just stay home and play games. I’ll go into the metaverse.” They may even turn to alcohol or substance abuse because those are the easiest things to do.

So what else do they need?

Lee: Imagine the mind-set of a person whose job was taken away by automation. That person has to be thinking, “Wow, everything I know how to do, AI can do. Everything I learn, AI will be able to do. So why should I take the universal basic income and apply that to learning?” And even if that person does decide to get training, how can they know what to get training on? Imagine I’m an assembly-line worker and I lost my job. I might think, truck driver, that’s a highly paid job. I’ll do that. But then in five years those jobs are going to be gone. A robot-repair job would be a much more sustainable job than a truck driver, but the person who just lost a job doesn’t know it.

So the point I make in the book is: To help people stay gainfully employed and have hope for themselves, it’s important that they get guidance on what jobs they can do that will, first of all, give people a sense of contribution, because then at least we eliminate the possibility of social unrest. Second, that job should be interesting, so the person wants to do it. Third, if possible, that job should have economic value.

Why do you put economic value last in that list?

Lee: Most people think jobs need to have economic value. If you’re making cars, the cars are sold. If you’re writing books, the books are sold. If you just volunteer and take care of old people, you’re not creating economic value. If we stay in that mentality, that would be very unfortunate, because we may very well be in a time when what is truly valuable to society is people taking care of each other. That might be the glue that keeps society going.

More thought should go into how to deal with the likely anxiety and depression and the sense of loss that people will have when their jobs are taken and they don’t know what to do. What they need is not just a bunch of money, but a combination of subsistence, training, and help finding a new beginning. Who cares if they create economic value? Because as the last chapter states, I believe we’re going to reach the era of plenitude. We’re not going to be in a situation of incredible scarcity where everyone’s fighting each other in a zero-sum game. So we should not be obsessed with making sure everyone contributes economically, but making sure that people feel good about themselves.

I want to talk about the last chapter. It’s a very optimistic vision of plenitude and abundance. I’ve been thinking of scenarios from climate-change models that predict devastating physical impacts by 2041, with millions of refugees on the move. I have trouble harmonizing these two different ideas of the future. Did you think about climate change when you were working on that chapter?

Lee: Well, there are others who have written about the worst-case scenario. I would say what we wrote is a good-case scenario—I don’t think it’s the best case because there are still challenges and frustrations and things that are imperfect. I tried to target 80 percent good in the book. I think that’s the kind of optimism we need to counterbalance the dystopian narratives that are more prevalent.

The worst case for climate is horrible, but I see a few strong reasons for optimism. One is that green energy is quickly becoming economical. In the past, why didn’t people go for green energy? Because fossil fuels were cheaper and more convenient, so people gained for themselves and hurt the environment. The key thing that will turn it around is that, first, governments need to have catalyst policies such as subsidized electric vehicles. That is the important first step. And then I think green energy needs to become economical. Now we’re at the point where, for example, solar plus lithium batteries, not even the most advanced batteries, are already becoming cheaper than fossil fuel. So there are reasons for optimism.

I liked that the book also got into philosophical questions like: What is happiness in the era of AI? Why did you want to get into that more abstract realm?

Lee: I think we need to slowly move away from obsession with money. Money as a metric of happiness and success is going to become more and more outdated, because we’re entering a world where there’s much greater plenitude. But what is the right metric? What does it really mean for us to be happy? We now know that having more money isn’t the answer, but what is the right answer?

AI has been used so far mainly to help large Internet companies make money. They use AI to show people videos in such a way that the company makes the most money. That’s what has led us to the current social media and streaming video that many people are unhappy about. But is there a way for AI to show people video and content so that they’re happier or more intelligent or more well liked? AI is a great tool, and it’s such a pity that it’s being used by large Internet companies that say, “How do we show people stuff so we make more money?” If we could have some definitions of happiness, well-likedness, intelligence, knowledgeableness of individuals, then we can turn AI into a tool of education and betterment for each of us individually in ways that are meaningful to us. This can be delivered using the same technology that is doing mostly monetization for large companies today.

Human design and robotic labor generate unique plaster designs

Robots are well known for having consistency and precision that humans tend to lack. Robots are also well known for not being especially creative—depending I suppose on your definition of “creative.” Either way, roboticists have seized an opportunity to match the strengths of humans and robots while plastering over their respective weaknesses.

At CHI 2022, researchers from ETH Zurich presented an interactive robotic plastering system that lets artistic humans use augmented reality to create three-dimensional designs meant to be sprayed in plaster on bare walls by robotic arms.

Robotic fabrication is not a new idea. And there are lots of examples of robots building intricate structures, leveraging their penchant for precision and other robot qualities to place components in careful, detailed patterns that yield unique architectures. This algorithmic approach is certainly artistic on its own, but not quite as much as when humans are in the loop. Toss a human into the mix, and you get stuff like this:

I’m honestly not sure whether a human would be able to produce something with that level of complexity, but I’m fairly sure that if a human could do that, they wouldn’t be able to do it as quickly or repeatably as the robot can. The beauty of this innovation (besides what ends up on the wall) is the way the software helps human designers be even more creative (or to formalize and express their creativity in novel ways), while offloading all of the physically difficult tasks to the machine. Seeing this—perhaps naively—I feel like I could jump right in there and design my own 3D wall art (which I would totally do, given the chance).

A variety of filter systems can translate human input to machine output in different styles.

And maybe that’s the broader idea here: that robots are able to slightly democratize some tasks that otherwise would require an impractical amount of experience and skill. In this example, it’s not that the robot would replace a human expert; the machine would let the human create plaster designs in a completely different way with completely different results from what human hands could generate unassisted. The robotic system is offering a new kind of interface that enables a new kind of art that wouldn’t be possible otherwise and that doesn’t require a specific kind of expertise. It’s not better or worse; it’s just a different approach to design and construction.

Future instantiations of this system will hopefully be easier to use; as a research project, it requires a lot of calibration and the hardware can be a bit of a hassle to manage. The researchers say they hope to improve the state of play significantly by making everything more self-contained and easier to access remotely. That will eliminate the need for designers to be on-site. While a system like this will likely never be cheap, I’m imagining a point at which you might be able to rent one for a couple of days for your own home, so you can add texture (and perhaps eventually color?) that will give you one-of-a-kind walls and rooms.

Interactive Robotic Plastering: Augmented Interactive Design and Fabrication for On-site Robotic Plastering, by Daniela Mitterberger, Selen Ercan Jenny, Lauren Vasey, Ena Lloret-Fritschi, Petrus Aejmelaeus-Lindström, Fabio Gramazio, and Matthias Kohler from ETH Zurich, was presented at CHI 2022.
