The Robots are Coming

A casual discussion about the automation of work and the decreasing relevance of humans to productivity.

Robot Sex Dolls

February 10, 2016

So, Real Doll has been around for years—I remember them being a blocked site on my company’s computer network back in 2000 or so. The company makes life-size sex dolls that sell for thousands of dollars.

But the founder wants to branch out:

We’re pushing into using this newer technology that’s emerging in terms of robotics and artificial intelligence, and what we’re trying to do is create an artificial intelligence that is user-customizable the same way that the dolls are customizable. So you would pick fundamental personality traits that appeal to you, be it shy, outgoing, and once it is then formed into a profile, you would interact with the AI and it would learn from you these key facts about you as the relationship, as it were, progresses. That kind of creates this very simplistic feeling of: someone cares about me.

So, he wants to embed the dolls with AI and animatronic features that let them substitute for a real human relationship.
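To make that concrete, here’s a rough sketch in Python of what a user-customizable profile like the one he describes might look like: fixed personality traits chosen up front, plus facts the AI learns as the "relationship" progresses. The trait names and methods are my own invention for illustration—nothing here comes from the company.

from dataclasses import dataclass, field

@dataclass
class PersonalityProfile:
    # Traits the user picks when the profile is created (e.g. shy, outgoing),
    # weighted from 0.0 to 1.0.
    traits: dict
    # Facts the AI learns about its owner over time.
    learned_facts: dict = field(default_factory=dict)

    def remember(self, key, value):
        # Store something the AI has picked up from interaction.
        self.learned_facts[key] = value

# Example: a mostly-shy profile that learns a detail about its owner.
profile = PersonalityProfile(traits={"shy": 0.8, "outgoing": 0.2})
profile.remember("favorite_music", "jazz")
print(profile.learned_facts)   # {'favorite_music': 'jazz'}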

I don’t think that a doll can replace a human interaction, with AI or without, but I do think there are probably cases of people who either by choice or by circumstance cannot have a real relationship, and so in that case, maybe the doll is going to be that replacement. But I don’t think there’s necessarily anything wrong with that.

Here’s the entire video (NSFW warning: naked dolls...)

Permalink


Buzzfeed Talks to Uber and Lyft Drivers

February 10, 2016

Buzzfeed made a video where they took rides with Uber and Lyft drivers and talked to them about driverless cars and what they might mean for their jobs. The drivers don’t seem terribly concerned about it.

There’s a quote from Lyft in the middle of it where Buzzfeed straight-up asked them what would happen to their drivers. They talk around the subject quite a bit, but basically say, “This is the future. We don’t know what’s going to happen to our drivers...”

Permalink


NASA Investigates Single Pilot Operations in Commercial Aircraft

February 08, 2016

NASA is investigating the idea of aircraft flown by a single pilot (“Single Pilot Operations,” or SPO), rather than the two-pilot crews in use today. The key is that a second officer is still present—on the ground. The ground officer monitors up to 12 planes at once.

[...] a specialized two-position ground control station where the operator when sitting in the right seat fills the role of “super dispatcher” for as many as 12 single-pilot airliners in cruise flight. If one of the 12 aircraft enters an “off-nominal” state due to an issue or anomaly, the ground station operator moves to the left seat and becomes a ground-based first officer dedicated to that aircraft.

If there’s a really big problem, the ground officer can take control of the plane remotely.

In a contingency, which has to be triggered by the captain, the super dispatcher transitions into dedicated support mode as a first officer in the left seat of the ground station; the pilot and first officer then conduct a briefing over an open microphone loop to assign duties, including who will fly the aircraft (the first officer flies via inputs to the autoflight system in the mode control panel representation in the ground control station).
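To make the two modes a bit more concrete, here’s a rough sketch in Python of how such a ground station might flip from super-dispatcher mode to dedicated first-officer mode when one of its twelve aircraft goes off-nominal. The class and method names are mine, not NASA’s; this is just an illustration of the workflow described above.

from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class Aircraft:
    flight_id: str
    off_nominal: bool = False   # set when the aircraft reports an issue or anomaly

@dataclass
class GroundStation:
    # Right seat: super dispatcher watching up to 12 single-pilot airliners.
    # Left seat: dedicated first officer for one aircraft in a contingency.
    watched: list[Aircraft] = field(default_factory=list)
    dedicated_to: Aircraft | None = None   # None while in super-dispatcher mode

    def assign(self, aircraft: Aircraft) -> None:
        if len(self.watched) >= 12:
            raise ValueError("super dispatcher is already watching 12 aircraft")
        self.watched.append(aircraft)

    def handle_contingency(self, aircraft: Aircraft) -> None:
        # The captain triggers the contingency; the operator moves to the
        # left seat and becomes that aircraft's dedicated first officer.
        if aircraft in self.watched and aircraft.off_nominal:
            self.dedicated_to = aircraft

# Example: flight 5 of 12 develops a problem and gets dedicated support.
station = GroundStation()
for n in range(12):
    station.assign(Aircraft(flight_id=f"FLT{n:03d}"))
station.watched[4].off_nominal = True
station.handle_contingency(station.watched[4])
print(station.dedicated_to.flight_id)   # FLT004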

Permalink


Will the machines ever wake up?

February 06, 2016

Could we ever replicate consciousness in machines? Might they ever become sentient like Skynet in the Terminator films? TechCrunch looks at the possibilities:

One of the most common contentions as to why consciousness will eventually be replicated is based on the fact that nature bumbled its way to human-level conscious experience, and with a deeper understanding of the neurological and computational underpinnings of what is “happening” to create a conscious experience, we should be able to do the same.

The author asked 30+ researchers (the bulk of the article is a long infographic with their responses).

Though some researchers supposed a longer time frame, and some a shorter time frame, the bulk of the responses (totaling nearly 50 percent of the respondents who were comfortable making a prediction) were in the 2021-2060 time frame.

That’s just five years away. And the implications are huge:

If a machine became conscious enough to feel, even at the level of a dog or squirrel, should we not have laws to protect them from types of abuse or neglect?

If machines were in fact able to consciously “feel” physical or emotional sensations, would we be obligated to program them to only experience happiness and bliss?

I would argue that if you have to “program” emotions at all, then they’re not really conscious.

Permalink


Y Combinator Basic Income Study

February 06, 2016

Y Combinator—the startup incubator—is about to launch a study on the feasibility of basic income.

We’re going to try something new—our first Request For Research. We’d like to fund a study on basic income—i.e., giving people enough money to live on with no strings attached. I’ve been intrigued by the idea for a while, and although there’s been a lot of discussion, there’s fairly little data about how it would work.

They’re looking for someone to do long-term research on it.

We’re looking for one researcher who wants to work full-time on this project for 5 years as part of YC Research. We’d like someone with some experience doing this kind of research, but as always we’re more interested in someone’s potential than his or her past. Our idea is to give a basic income to a group of people in the US for a 5 year period, though we’re flexible on that and all aspects of the project—we are far from experts on this kind of research.

Permalink


Disembodied Objects of Speed and Efficiency

February 05, 2016

From a discussion about automated business processing in Mindless: Why Smarter Machines Are Making Dumber Humans:

It is here perhaps that IBM gets us closest to a digital version of Aldous Huxley’s Brave New World and where, whether we are physicians, fast food workers, middle managers, or Walmart associates, we have become disembodied objects of speed and efficiency joined to these electronic symbols on the screen—symbols that the “process assemblers” then move around as they see fit and with the real, corporeal us having to follow orders like members of a digital chain gang, pushed first one way and then another by our virtual overseers.

Permalink


The Crash of Air France Flight 447

February 05, 2016

This Vanity Fair article examines the 2009 crash of Air France flight 447, which went down in the Atlantic en route from Brazil to France. The ostensible cause of the crash was a stall that followed faulty airspeed readings from iced-over pitot tubes (the plane’s airspeed sensors).

However, the more subtle, sinister cause of the crash might be that the pilots just weren’t prepared for anything to go wrong. A pilot should be able to recover from a stall, yet these pilots could not.

To put it briefly, automation has made it more and more unlikely that ordinary airline pilots will ever have to face a raw crisis in flight—but also more and more unlikely that they will be able to cope with such a crisis if one arises.

Clearly, when the technology failed, human error became a factor:

The solution was simple, and fundamental to flying. All Bonin had to do was to lower the nose to a normal cruising pitch—about to the horizon—and leave the thrust alone. The airplane would have returned to cruising flight at the same speed as before, even if that speed could not for the moment be known.

But Bonin continued to pull back on the stick, jerkily pitching the nose higher.

When the machines failed, the pilots couldn’t pick up the slack. Had their skills atrophied over the years as they had less and less to do in the cockpit?

Planes are simply insanely safe these days. The biggest problems now seem to arise when the plane has to interact with the pilot:

[...] the accident rate has plummeted to such a degree that some investigators at the National Transportation Safety Board have recently retired early for lack of activity in the field. There is simply no arguing with the success of the automation. The designers behind it are among the greatest unheralded heroes of our time. Still, accidents continue to happen, and many of them are now caused by confusion in the interface between the pilot and a semi-robotic machine. Specialists have sounded the warnings about this for years: automation complexity comes with side effects that are often unintended.

Should planes just be completely automated?

Permalink


Why the Future Doesn't Need Us

February 02, 2016

Bill Joy is one of the world’s great computer scientists. He co-founded Sun Microsystems, and he had a lot to do with the creation of the Java programming language.

In 2000, Joy published an essay in Wired magazine that has become more and more influential over the years: Why the Future Doesn’t Need Us. I read it years ago (here’s my blog post about it on Gadgetopia, 13 years ago), but just re-read it for a fresh perspective.

It’s depressing. Still as depressing as it was a decade-and-a-half ago.

Joy explains that he’s afraid of the future. We’re doing things with technology that might get out of control. Specifically, he discusses a trio of technologies he calls GNR:

The 21st-century technologies—genetics, nanotechnology, and robotics (GNR)—are so powerful that they can spawn whole new classes of accidents and abuses. Most dangerously, for the first time, these accidents and abuses are widely within the reach of individuals or small groups. They will not require large facilities or rare raw materials. Knowledge alone will enable the use of them.

This is more dangerous than the technological basis of the Cold War: NBC (Nuclear, Biological, Chemical), because of the threat of self-replication:

Specifically, robots, engineered organisms, and nanobots share a dangerous amplifying factor: They can self-replicate. A bomb is blown up only once—but one bot can become many, and quickly get out of control.

He’s not optimistic:

But now, with the prospect of human-level computing power in about 30 years, a new idea suggests itself: that I may be working to create tools which will enable the construction of the technology that may replace our species. How do I feel about this? Very uncomfortable.

Not at all:

This is the first moment in the history of our planet when any species, by its own voluntary actions, has become a danger to itself—as well as to vast numbers of others.

He ends on a positive note: we can control these things, but only through deliberate action. We need to police GNR technology at the same level as we police NBC technology. But in the 15 years since he wrote this, our ability to reduce nuclear threats to the world—or even to compel countries to be transparent about them—has been limited at best.

In fact, it seems the publication of the essay itself was designed to launch this discussion:

My immediate hope is to participate in a much larger discussion of the issues raised here, with people from many different backgrounds, in settings not predisposed to fear or favor technology for its own sake.

His closing sentiment is thus summed up with a callback to the discussion of nuclear war:

We must do more thinking up front if we are not to be similarly surprised and shocked by the consequences of our inventions.

Permalink


The Gig Economy for Welfare

January 29, 2016

Two conservative Washington policy wonks write in Politico that we should use the gig economy to put welfare recipients to work:

Historically, some opponents of workfare have argued that work requirements are untenable because the government cannot find a job for every welfare beneficiary. That may have been true years ago, when a “job” was binary and full time, but today the gig economy offers the solution: It can easily and quickly put millions of people back to work, allowing almost anyone to find a job with hours that are flexible with virtual locations anywhere.

The Week calls this flawed:

You can break the amount of demand in the economy up into bigger chunks—fewer, more traditional jobs with high pay—or into smaller chunks—more jobs, but with lower pay. But the amount of demand in the economy is the amount of demand in the economy. And right now, as it has generally been over the last few decades, we flat out don’t have enough.

Basically, if you use these services to put a bunch of people to work, pay will drop, since all you’re doing is adding more workers competing for the same amount of work. More people wanting (needing) those jobs means that employers can pay less.
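A toy calculation (my own made-up numbers, not The Week’s) shows the shape of that argument: if the total amount employers will spend on this work is fixed, spreading it across more workers just means less pay for each of them.

# The Week's point, in arithmetic: demand for the work is fixed, so adding
# workers only shrinks each worker's share. All figures are invented.
TOTAL_DEMAND = 1_000_000   # dollars of gig work employers will buy per month

for workers in (100, 200, 400):
    pay = TOTAL_DEMAND / workers
    print(f"{workers} workers -> ${pay:,.0f} each per month")

# 100 workers -> $10,000 each per month
# 200 workers -> $5,000 each per month
# 400 workers -> $2,500 each per month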

Permalink


Robot Farming

January 29, 2016

A Japanese company is going to harvest lettuce with robots.

So much so that Spread is creating the world’s first farm manned entirely by robots. Instead of relying on human farmers, the indoor Vegetable Factory will employ robots that can harvest 30,000 heads of lettuce every day.

[...] Spread’s new automation technology will not only produce more lettuce, it will also reduce labor costs by 50%, cut energy use by 30%, and recycle 98% of water needed to grow the crops.

Permalink