Note: This is a more complete response to a YouTube video about AI and solving the “stop button problem.”
One of my professors once told a story about an AI latching onto the wrong thing during training. It was being taught to distinguish friendly tanks from enemy tanks using photographs, and it seemed to learn quite well in the lab, but it failed badly when tested in the field. It turned out that the friendly photos had been taken at night (the friendly tanks were equipped for night warfare) while the enemy tank photos had been taken during the day, so the AI had simply latched onto the lighting.
So an open question is how well an AI can really learn the rewards we intend, and how to keep it from learning an improper one, since the training data can unintentionally contain a bias toward an unforeseen reward that isn't truly desirable.
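The tank story is a textbook case of a spurious correlation. The sketch below is a toy illustration, not the original system: the synthetic "photos," the brightness and shape features, and the one-feature decision-stump learner are all invented for demonstration. It shows how a learner can score perfectly in the lab by keying on lighting, then collapse in the field once lighting no longer correlates with the label.

```python
# Toy illustration of the "tank classifier" failure mode: the learner
# latches onto a spurious feature (brightness) that happens to correlate
# perfectly with the label in the training set. All data is synthetic.
import random

random.seed(0)

def make_photo(friendly, night):
    """Each 'photo' is two features: brightness, and a weak genuine cue
    (think turret shape) that only partially separates the classes."""
    brightness = random.uniform(0.0, 0.3) if night else random.uniform(0.7, 1.0)
    shape_cue = random.uniform(0.4, 1.0) if friendly else random.uniform(0.0, 0.6)
    return (brightness, shape_cue), friendly

# Training set mirrors the story: every friendly photo taken at night,
# every enemy photo taken during the day.
train = [make_photo(friendly=True, night=True) for _ in range(100)] + \
        [make_photo(friendly=False, night=False) for _ in range(100)]

def fit_stump(data):
    """Pick the single feature, threshold, and direction that best
    separate the classes on the training data (a decision stump)."""
    best = None
    for f in range(2):
        for thresh in [i / 20 for i in range(21)]:
            for sign in (1, -1):
                acc = sum((sign * (x[f] - thresh) > 0) == y
                          for x, y in data) / len(data)
                if best is None or acc > best[0]:
                    best = (acc, f, thresh, sign)
    return best

train_acc, feature, thresh, sign = fit_stump(train)
print("chosen feature:", "brightness" if feature == 0 else "shape cue")
print("training accuracy:", train_acc)

# Field test: lighting is now independent of which side the tank is on.
field = [make_photo(friendly=random.random() < 0.5, night=random.random() < 0.5)
         for _ in range(1000)]
field_acc = sum((sign * (x[feature] - thresh) > 0) == y
                for x, y in field) / len(field)
print("field accuracy:", round(field_acc, 2))
```

Because brightness separates the training classes perfectly while the genuine shape cue only does so partially, the stump picks brightness, reports 100% lab accuracy, and then performs at roughly chance in the field. In the reward-learning setting the same mechanism applies: the data rewarded the wrong feature.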
I do think that being able to audit AI systems should be an important goal of AI development. The whole “black box” approach of simply letting learning systems loose without knowing how they work or what they’ve learned could come back to bite us.
The video also talks about cases where copying an expert might not be optimal, since even an expert doesn’t always do things in the most efficient manner. That may be an important consideration in a game of Chess or Go, but I don’t think it’s the most important consideration in real life.
In real life, some things break down when made maximally efficient. Mining coal and oil, for example: sure, we have a huge demand for those products, but the global warming they contribute to is going to have severe long-term consequences. So maybe we ought not to try to maximize their supply.
The job market is an important consideration as well – should “let’s make my business efficient at all costs, including labor costs” really be a long-term goal for businesses? Should the ultimate goal of AI really be to become more efficient than humans in absolutely all walks of life?
When all is said and done, is pursuing efficiency to the point where AI takes over everybody’s jobs really worth the cost?
The industrial revolution took away a lot of “unlikable” jobs – jobs that were largely mundane and boring except to a tiny minority of people.
The AI revolution promises to take away even jobs that people actually like doing. Why should that be a goal? Should efficiency for the sake of efficiency really be an unquestionable ideal?
I’m not saying that AI research should be stopped – but perhaps it should be redirected. Let’s not focus primarily on its sheer efficiency, but rather on how it can best serve us. After all, helping us (both as a collective and as individuals) is in my opinion the real goal here. AI and robots ought to be our servants, not our masters.
The best example of an AI system that I’d like to see is the Star Trek computer – at least, in the episodes where it is helping the crew and not going berserk. For the most part, it simply follows commands in a limited manner. We’re actually almost to that level with current technologies like the Amazon Echo.
Okay – so maybe I am saying there will be a “good enough” point with AI. And we’re sorta already there. But there aren’t really any more “big problems” to solve in everyday life. We already have the ability to top Maslow’s hierarchy of needs for most people in many countries. The remaining big problem is really getting the bottom two rungs (physiological and safety) taken care of in areas where they aren’t being met.
The other rungs of the hierarchy of needs I would argue aren’t really something that technology is meant to solve: Love/belonging is handled by having friends and family, esteem is handled by modern societal structures, and self-actualization is something that can really only be handled personally and by the encouragement of friends and family. Technology can help bring in information and tools to help those areas (such as training tutorials and computerized tools for an artist), but the idea should be to help us, not to take over the tasks completely.
Is there really a net societal benefit to replacing artists with machines? Okay, I got more paintings on my walls (or more assets in a computer game), but complete automation of creating artwork only benefits a small number of people who helped design the AI and some management.
People who make artwork really get the short end of the stick if AI is creating the artwork. And it’s not a dreary job – artists generally like what they are doing. Is there really a net benefit to completely replacing their jobs with AI? I would say “no.”
I think it is a much greater societal benefit to have billions of paintings made by millions of artists who love what they do than millions of paintings made by a machine that maximizes a business’s efficiency by minimizing its number of employees.
Unfortunately, our economic system is not set up to care about employment. It largely rewards business efficiency at all possible costs, with employees seen as a cost rather than a benefit. Again and again, we see them cut whenever possible to maintain the business’s efficiency.
Our society should not exist merely to feed a few large businesses with deep pockets and very few employees. If a business is large, it should be large in employment as well. We need to come up with ways to make jobs a net positive for businesses, rather than a burden.
. . . and yes, that may mean sacrificing some efficiency or profit. But I don’t think that raw efficiency and profit should be at the top of the priorities list anymore, because the societal cost of too much efficiency is simply going to become overwhelming.