
Alternative ideas you can use to start a conversation or to add interest to speeches or presentations


Innovation

Inventions Letters





Need more quotes? Extracts from books on Inventions

Inventions



Licensing Patents

I found David Cooper's article on changes to European patent law illuminating (15 February, p 32). A question arises that I have never had a good answer to: if the aim is to foster invention, why do patents grant a monopoly, rather than an obligatory licensing fee so that others can use a newly patented idea or invention? If others could legally use an invention and then improve it as they saw fit, that would spawn many more breakthroughs than is likely from a single organisation that is often preoccupied with defending its patent.

What's more, a small organisation with a great idea has a better chance of getting royalties from Megacorp for using their idea than of being able to prevent Megacorp from stealing it. I believe this approach has been used in the urgencies of wartime and worked out well. Deciding what would be a fair royalty could be tricky, but compromises here should be easier to adjudicate than the black and white of granting a monopoly.

Robot Love

Your leader on getting hitched to robots was quite right: "The love for a robot may become a love that dare not speak its name" (15 February, p 5). But first we need a name to avoid mentioning. How about "automating"? The robot itself (herself? himself?) could then be an automate. And presumably give you a lovebyte.

Barbed Wire Telephone Lines

Australia had barbed-wire telephone lines that served many farms up to the late 1960s, just like those you describe in the rural US in the early 20th century (21/28 December 2013, p 76).

The granddaddy of them all was a 700-kilometre-long line in Western Australia. This was a single galvanised iron fencing wire on poles. I once spoke end-to-end over this line, but the weather usually made it necessary to call a farm part way down the line and they would relay the call.

The lines were eventually replaced in 1964 by a microwave radio relay system.

Robot Consciousness

Ian Mapleson asks how we should respond if a self-aware robot requests not to be turned off (23 November, p 33). Surely a more thorny moral dilemma lies in store when a self-aware robot asks to be turned off.

Rescue Robots

In your 2015 preview, Hal Hodson writes about humanoid rescue robots (20/27 December 2014, p 26).

I would suggest giving them four arms and four hands. Such a robot could push down a wall with one pair of arms and clear the rubble with the other set, likely progressing more quickly. Or it could climb or descend slopes to make a rescue with more dexterity along the way.

This would also make them more useful for everyday tasks. When raising my small children I often wished for more arms and hands. Just try pushing a stroller while carrying a few parcels, holding a toddler's hand and negotiating a street crossing or car park.

A four-armed robot could open doors or gates while carrying parcels, and could be especially useful in helping immobile people. So can someone redesign the helpful robot to be even more handy?

Robot Warfare

The implicit assumption in your article is that robotic weapons would be deployed against human combatants. This ignores the facts about arms acquisition in the modern world.

In many conflicts, the same manufacturer may ultimately have provided the weapons used by both sides. This was the case in the 1980–88 war between Iran and Iraq, where British weapons were procured by both nations.

What happens when both sides send in the same robots? Do we apply the principle of warfare expressed by first world war general Douglas Haig that both sides keep shooting until one combatant is left? Do we revert to human versus human trench warfare? Or do we all realise the stupid futility of it all, have a good laugh at ourselves and settle down to peaceful coexistence?

In your article on the moral dangers of autonomous, lethally armed robots, Peter Asaro says "most people now feel it is unacceptable for robots to kill people without human intervention" (18 April, p 7). The moral reasoning behind this view is intriguing. How is sending a programmed, armed robot into an area designated as "enemy occupied" any worse than, say, bombing the area from 10,000 feet?

In fact, the level of precision and the amount of human judgement involved in target selection with the robot would arguably be greater.

There is an even stranger moral angle. Someone who is ordered to go and kill strangers in a war can suffer severe emotional trauma and other mental distress as a result. In the future, there may be societies that decide, on moral grounds, to delegate all killing of the enemy in their wars to fully autonomous robots so as to protect their citizens from such emotional trauma. In that unnerving scenario, the bots wouldn't be seen by those citizens as devils, but as heroic guardians.

As I consider the question of whether we can control killer robots, I also ask: will a killer robot panic and shoot at everything in sight? Will it kill when in doubt? Will it suffer battle fatigue and shoot in uncontrolled rage? I suggest that in future soldiers should not be authorised to kill without robotic supervision, rather than the other way around.

Robot Companions

A Sherry Turkle article in The New York Times suggests that robots will never be capable of empathy, so they will never be seen as real companions.

NYT article 11 Aug 2018

It occurs to me that an artificial intelligence simulating empathy isn't as far removed from us as we would want to believe. We crave genuine intimacy and empathy, but the history of humanity is littered with tales of hearts broken by those who abuse the intimacy needs of others. Some simply think differently but fake it well, like the sociopath.
It isn't that different from corporate behavior either. A corporation is an artificial entity under the law, but many do their best to invoke feelings of security and emotional connection to customers, trying to instill loyalty to the brand and the company.
So who will gain actual empathy first, the sociopath, the corporation or the AI?

We easily imagine AI robot companions taking care of elderly, but why do we imagine it will stop there? If a caregiver bot could be created, they would soon be taking care of children, too. First the unwanted children, the traumatized little ones whose parents failed them utterly but who are damaged, not the cute, perfect, innocent little darlings prospective adopters imagine. Once that becomes accepted, parents who can't, or don't want to, parent their offspring will buy robots to do it for them. No need to stop playing Candy Crush or checking social media to change that diaper, let the bot do it. And once children are raised by robots, they will not only love the bots, they will prefer them to real, difficult humans. Eventually, people raised by bots, effectively married to bots, and having their own biological (sperm or egg donated) children raised by bots, will know nothing else.

We need to learn to never say never.
In an age when early generation AI has beaten the best humans at Jeopardy and at Go, saying never seems to me to be hubris.
There is a recurring Turing competition in which each artificially intelligent entity tries to convince a panel of human judges that it is human through text-message-based conversations. Human competitors are mixed in with AI competitors, and the judges attempt to figure out which are humans and which are technology. Already, at this early stage, the most human-seeming technologies can convince more judges that they are human than can the most convincing human competitor. Yes, the competition is so stiff that judges will decide that some humans are not.
One can expect that, over time, the interface barrier will erode. Text message interactions will evolve to voice interactions to co-present interactions.
At some point, technologies will understand a person better than that person understands himself or herself. We are creatures of patterns and have only dim, preconscious awareness of some of these. We are also suckers for empathy. How can we possibly resist something that understands us so deeply?
Never is a very long time.

Interesting article. But let me turn the "telescope" around. What about the biological "robots" who live among us in our homes? And you know what I mean. Is a dog incapable of love or empathy, or of returning the same, because it is not human? I think all my dog-owning friends would disagree. Now granted, a biological "life form" is one thing (especially one we have shared our lives with for so long).

Never? Really? Never is a VERY long time.
People develop deep emotional relationships with cats, creatures with whom they can't verbally communicate and who care only where their next meal is coming from. My guess is that a person could develop a significant emotional relationship with a robotic cat having a convincingly warm and furry outer shell, provided the person was not told it was a robot. Perhaps such a robot doesn't exist today, but fifty years from now? It seems a possibility.

The way people have elevated pets to the equivalent of children indicates that they will do the same with AI.

Many folks do fine with dogs and cats as companions taking the place of other people. A robot could be a greatly more intelligent companion, and probably wouldn't cost so much in vet and food bills.

Children often develop quite intense emotional relationships with inanimate dolls and childhood toys and possessions by bringing their own human emotions into their relationships with the inanimate objects. There are now toys that respond to interactions with their owners and develop according to how their humans treat them. And of course humans are really good at treating pets like human family members. In short, we have a great capacity for humanizing non-human things and having intense emotional relationships with those things.
All this means that humans are good at deceiving themselves about the emotional relationships they have. For example, millions of humans regularly project their own feelings of intimacy onto partners who don't reciprocate those feelings. People marry and have children under these false assumptions. With such a track record, it actually strikes me as silly to think that people won't be falling for human-like robots or virtual beings in twenty years' time, and, if technology continues to evolve, it's really silly to discount the possibility that robots will be lifelike enough in their virtual emotional gamut to make them rather easy to fall in love with at some point later this century.

I'm not sure how the idea that children might get attached to robots is different from the age-old tendency of children to get attached to their dolls, blankies, and entirely imaginary friends? These, too, don't respond, and yet children draw enormous comfort from them. Children have always done so, without apparent emotional stunting.

Quite the contrary: computers and robots seem able to achieve a connection with people, especially kids, with certain affective disorders like autism. This research is not new, but the advance of iPads and robots seems to be unlocking a lot of potential. So while "artificial intimacy" may be difficult to imagine now, it may involve learning, or unlearning certain conventions about, what can be enriching for our emotional life.

When have people ever needed affections returned to consider their affections genuine? Is there no such thing as unrequited love?
Perhaps Turkle is indicating that intimacy requires each party to open their heart to the other and, as AI cannot "open its heart", it's impossible to call such love intimacy.
But that's simply picking the most limiting term ("intimacy") to make a very limited point.
Humans will love artificial and unfeeling entities, with actual genuine love. Indeed, they already do. They will mourn the passing (obsolescence, accidental damage/deletion, etc.) of beloved artificial beings.

Have you never wept during a movie depicting an event that never really took place?

"The machines will convince us that they are conscious, that they have their own agenda worthy of our respect. We will come to believe that they are conscious much as we believe that of each other. More so than with our animal friends, we will empathize with their professed feelings and struggles because their minds will be based on the design of human thinking. They will embody human qualities and will claim to be human. And we'll believe them." -Ray Kurzweil, The Age of Spiritual Machines
Turkle makes the argument that since machines cannot have human experiences, they are unable to express true human emotions. I agree, but I'm not convinced that it matters. If machines are able to convince us of their empathy, can they not be effective human companions? Does it matter that we consider them not-real?
I think about the characters we’ve created in our stories, myths, movies and video games. These characters aren’t real—they’re projections of their creators. At some level we realize that, and yet, we empathize, vilify, love and hate them all the same.

I think it's ridiculous to believe that machines can't "learn" to have empathy. It's like the old saying: "Money can buy you love that you can't tell from the real thing. Until your money runs out." But as many of us have experienced, when the money runs out so does the love. Empathy is a way of caring for a person. A machine can "learn" to care deeply for a person: crying, grieving, consoling, holding and loving. Dang, I can't wait. And she'll talk just enough but not TOO much.

Artificial intimacy is still intimacy. Six hundred friends on Facebook, Tinder hook ups, Snapchat connections, tribal leaders on Instagram are all indicative that people: (a) don't know the difference between real and imagined friends, (b) don't care about the real and imaginary emotional connections, and (c) choose the relationships in hyper-reality where one is liberated from reciprocating (i.e., take what we can, without having to give). Artificial intimacy makes a heady promise of a reset button; something real intimacy cannot match.
Real life is paling in comparison. Never mind the loss of community; a room full of people have no interest in connecting with each other, but are absorbed in the hyper-reality to which they connect via their cell phones. Or are you not aware of the 3 billion human hours spent per year in America on video games?
Then there is Japan. A notable segment of people have lost interest in sex, intimacy, relationships. A notable few are marrying cartoon characters.
In the mores of 1960, artificial intimacy was a challenging notion (but then again, not really). Today it seems inevitable.

Oh, cut the drama. The world is filled with people who are unsatisfied with their relationships. Perhaps that's the majority of us. Humans do such a bad job of meeting each other's needs that almost anything would be better. It is why we embrace dogs and will desperately cling to our faithful AI friends.

I'm not sure why befriending robots would decrease humans' empathy, even though it might be misdirected. Empathy is at least two things: 1) a response that has evolved over hundreds of thousands of years living as homo sapiens, and before that as other species, and 2) a computational network in our brains. The first point makes it difficult to "lose" empathy by applying it to the wrong targets (machines rather than people or animals). The second point makes it at least possible to replicate empathy in artificial intelligence. Right now the efforts are to create simulated empathy, but that's not a limit on what AI could achieve. Whether one needs a chemical-producing body to be empathic (as Antonio Damasio might argue) is perhaps a question about the limits of an AI. It is not at all clear to me that robotic friends will have any effect upon our empathic abilities, but I can see it leading to ignoring each other, with all of the quirks of being human making us less than ideal companions, in favor of more malleable robots or AIs. This is happening now with video game preferences over human interactions. A greater threat to our empathy is the manipulation of our social responses so that we treat our fellow humans as less than human, because of their perceived "otherness," based on race, religion or ethnicity. Humans are responsible for that, not robots.

The works of Philip K. Dick, which examine what it means to be human or to be an android, are probably the most germane sources on this subject. A thinking, laughing, friendly and reliable friend, regardless of origin, would certainly be a boon to the lonely and helpless. We simply can't idealise away loneliness or the need for physical assistance.
What, in truth, are the cues and phrases that tell us, and inform our hearts, that someone is listening; that this person cares and feels an emotional connection to us? Can we always be certain these are true feelings, anyway?
Couldn't a deep machine intelligence provide these same cues, these same kindnesses? Why not have a friend and companion or even lover who may never have been held warm and safe inside a mother's womb, but who can nevertheless talk, laugh, make original observations, enjoy a movie and give you a hug?

The hubris.... To think we are so special that every aspect of us can’t be replicated by adding the emotional sequences that we self-define as our “human” quality is pure arrogance.

This conversation was enjoyably anticipated by the Dresden Dolls' classic 2004 song "Coin-Operated Boy":

coin operated boy
sitting on the shelf he is just a toy
but i turn him on and he comes to life
automatic joy
that is why i want a coin operated boy

made of plastic and elastic
he is rugged and long-lasting
who could ever ever ask for more
love without complications galore
many shapes and weights to choose from
i will never leave my bedroom
i will never cry at night again
wrap my arms around him and pretend....

Your attempt to distinguish "real" from "artificial" is borne of sentimentality, not clear thinking. As a scientist yourself, you should know better!

The Emotional Chatbots Are Here to Probe Our Feelings by Arielle Pardes 01.31.18
https://www.wired.com/story/replika-open-source/

"Now at the heart of this is some fancy programming in Python and Scala."
The Wired article says that the open-source version is called CakeChat, which is written using the Theano and Lasagne Python libraries:
CakeChat: Emotional Generative Dialog System https://github.com/lukalabs/cakechat
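For a speech or presentation, the conditioning idea behind such systems is easy to demonstrate. The sketch below is a toy illustration in Python, not CakeChat's actual API: real systems condition a trained neural sequence-to-sequence model on an emotion label, whereas here a trivial template lookup stands in for the model.

```python
# Toy illustration of an emotion-conditioned reply, in the spirit of
# "emotional" dialog systems such as CakeChat. A real system would
# condition a neural network on the emotion label; here a simple
# template table stands in for the model so the idea stays visible.

TEMPLATES = {
    "joy":     "That's wonderful to hear! {echo}",
    "sadness": "I'm sorry. {echo} That sounds hard.",
    "neutral": "I see. {echo}",
}

def reply(message: str, emotion: str = "neutral") -> str:
    """Produce a reply whose tone depends on the requested emotion."""
    template = TEMPLATES.get(emotion, TEMPLATES["neutral"])
    return template.format(echo=f"You said: '{message}'.")

print(reply("my dog died", emotion="sadness"))
print(reply("I got the job", emotion="joy"))
```

The same input yields a different reply as the emotion label changes, which is the whole trick: the "empathy" is a conditioning variable, not a feeling.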

There's an age of artificial intimacy *right now*. We seem all too eager to trade an authentic life for the appearance of authenticity—just look at any social media platform. There's an implication in this article that the simulacrum of connection that technology provides is something being forced upon us; but let's be honest: we're choosing this.

All one has to do is look at the lives of millennials to see that machines already rule their lives: they spend more time looking at screens than at other people. Parents push strollers, ignoring their young, while absorbed in their iPhone screens. Instead of forming real relationships, the young have Tinder hookups with interchangeable bodies picked off a shelf like laundry detergents. The Internet is full of misogynistic websites where red-pilled young men tout the virtues of sex dolls. Research shows that 70% of millennials have social anxiety.
All over this land, parents are using machines as babysitters and it is common to see children aged 3, 4 and 5 already addicted to little screens. At a campground recently, the children of other campers were screaming bloody murder wanting daddy to set up a hot spot so they could get on their little screens, which held more value for them than the river and forests and natural world. The same kids talk to Alexa as if it is a person.
Most people already live their lives and expend their income in service of machines: autos, computers, all the technology. Come on, humans have already decided that they are OK with destroying the planet so that they can have their screen and machine fixes.

It'd be tempting to say that Ms Turkle holds an overly sentimental view of "humanness", but in reality her arguments point to a zero-sum game in which, as robots evolve, humans regress. It is as though the human species she regards with such uniqueness is incapable of evolving along with its robotic companions; as though, because computers can now beat any human at chess, humans are no longer capable of playing chess with each other or against computers. In reality, humans are all better chess players today, because playing against our own creations made us better, and it gives us enormous emotional satisfaction when we beat a chess program (albeit one set at a low level, lol).

Empathy is what self esteem was in the 1970s. It has become a catchphrase which is devoid of actual meaning. To think that empathy solves problems is reductive, a lazy fix put forth to the masses who are looking for an easy solution.
The need for others to feel what we feel is actually an offshoot of narcissism. No one is obligated to consider your feelings any more than you are obligated to consider someone else's feelings.
Technology has spoiled us all. It feels better to listen to a podcast, watch a tv show, go on YouTube, read a book, because these inanimate experiences actually make us feel MORE understood and MORE empathized with than interactions with people.
Unidirectional empathy. That is what people are looking for. The truth is that no one can provide endless amounts of empathy better than a robot.

I used to transcribe the conversations between people in marriage counselling. After a while you could predict what the couples would say. A Robot with a pleasant tone and who could speak about a variety of interesting topics might not be a bad conversationalist.

Robots can be programmed to understand a person's needs and adjust their responses accordingly, contrary to a real person, who is driven by their own ego and agenda. True love doesn't exist (or rarely does), but with robots it can be programmed, and that will heal many broken hearts. The author is wrong, and probably very lonely.

Author demonstrates a lack of imagination and technical understanding. It will be possible to emulate humans. It's mostly a matter of processing power and storage at this point. If you can't tell the difference, is there one?

Having lived in Japan for a few years, I can recommend Sayaka Murata's recent novel, Convenience Store Woman (better translated as Convenience Human), for a glimpse of where this trend has already led.

I think Ms. Turkle is correct to point to the vast difference between mimicking and real experience. However, she underestimates the emptiness and superficiality of humans: many of us could indeed find the simulation enough. With our social media and computer games, we are more than halfway there.

I'm always astounded by the lack of imagination. Ms. Turkle, how do you know that the humans with which you interact, relatives by blood or marriage, friends and acquaintances, have genuine emotional attachments to you? You observe their behavior, but unless you do that while they are in an MRI, you can't see any internal indication that what you observe is real.
Software can, and already does, mimic human thought and emotional response. Not very well, but much better than ELIZA, the 1960s-era chatbot.
If you will never know what's truly inside the individuals with whom you interact, and those individuals include androids running software, or remote personalities, how can you claim that human/machine intimacy is false, somehow?

It's almost a farce to complain about robots that simulate real emotions in comparison to humans. If humans are superior to other animals in any way, it is in their ability to lie to each other and to themselves far better than other living creatures. Picasso even remarked that a good artist lies to tell the truth, and when we manage to imitate humans that well, or even better, no one will be able to tell the difference. My best friends are dogs and cats and squirrels and sparrows because they are so amateurish about lying, although they do their best. Crows, on the other hand, can mimic humans much better. I imagine a cuddly vacuum cleaner might be quite attractive.

"there will never be" is a statement that humans should never really enunciate. Especially when it comes to technology. Why would robots never be able of empathy. Cats sort of do, Dogs a bit more. And we are not really so endowed when it comes to empathy. we routinely refuse to help people less lucky than us. Who can tell: fast forward some amount of years and artificial intelligence being might be like the angels of our fantasies. Loving, protecting, infinitely providing without an ounce of selfishness ... maybe a bit remote, but possible better than us.

If machines are able to read and understand me so well I won’t know the difference, why will it matter? I have no doubt that machines will actually be better, in fact, at showing empathy, even if it is “fake”. After all, what is a bedside manner half the time, other than some trite, artificial sentiment generated by a professional who simply can’t do it “for real” with every patient he encounters.
This obsession with authenticity is daft. Already we communicate via Skype and phone, and let pixels and data do our work for us. Why should we care if the image we start seeing is simply computer generated?
I’ll have no problem whatsoever making friends with a computer program if it’s entertaining and thought provoking, and aids me in my journey through life. Fact is, if it’s able to do it, then surely that’s all that matters?
I’m thoroughly looking forward to conversing with machine intelligence, in fact. I just hope I’m around long enough to encounter it.


As a mental health professional, I have counseled many people who picked as intimate partners people who were incapable of feeling empathy, although certainly able to convince their partners in the early stages of the relationship that they "loved" them. Being in a relationship like this is akin to eating what looks to be a sumptuous banquet, but which contains no nutrients necessary for human survival. It might taste great, might even be filling, but does not sustain life. A relationship with an actual robot strikes me the same way.

Surely you can take that one step further.
You're describing human partners who over-promise and (eventually) under-deliver.
But a robot partner would deliver what's on the tin, and never let you down.
And I think that's an important aspect: we crave relationships with the unconditional love we got from mother (mostly), who adored us despite our many faults.
You will never get that from a human, but you could expect a realistic facsimile from a machine.

I just got off a blog for people with pulmonary fibrosis. It was heartbreaking to hear of elderly people living on their own, worried sick about how much longer they could manage. What a wonderful thing it would be for them to have a robot to prepare simple meals, take care of housekeeping and, if necessary, make a contact/emergency call. Just the peace of mind knowing that there is some "other" looking out for them.

So the people who foresee robot companions in the future (for old people, for hospitals, for sex and for companionship) offer rational reasons for their predictions.
But Turkle has only an emotional response - "It's icky and I don't like it"
(Reminds me of several other social issues that challenged those with entrenched prejudices)

Machines cannot feel. Therefore they cannot feel love or empathy. However, machines could be made to act as if they loved and felt empathy, and to do so consistently and reliably. The danger in a person who fakes love or empathy, or does not even bother to fake it, is that the person is not reliable, and will not reliably act as if in love and with empathy. But the machine could. So what is the value? Is it the action, or the motive? We never actually know the inside of another's mind, only what we trust their actions will be. A machine can give that. Maybe a machine can even give it better. So, is that love and empathy? Is acting it being it too? Does it matter? Either you receive love and empathy, or you don't. Siri and the like are a harder question. They may even cause children to expect more than they'll ever get from a real person.

Pronouncements such as these (always from socially successful people) drip only lightly-concealed derision for persons less fortunate than themselves: Why, if only you were as genetically lucky as me, you would have no need for artificial anythings.
The unhappy reality is that not all are socially adept, or graced with a competitive phenotype; yet these same individuals have the same needs, the same desires, the same appetites as the genetically gifted. Should they not be offered the same opportunities for joy and fulfillment?

I have visited my mother in the old-folks home enough times to know that robotic companions would not be as good as about half the workers, and better than the other half. I don't see the point of this hand-wringing. If future old-folks homes can have robot companions, as an additional service, that's great. They will never be as good as the best human beings. But every time I visit, my mother points out people she knows who never get any visits, and they never look happy. I'll bet they would enjoy a robot companion right now.

For about 20 years a computer screen (and now a smartphone screen) has been my constant daily substitute for intimacy and connection to my fellow humans. I'm on an island of my own making. I wish it weren't this way, but it is this way. I can't worry about whether robots will be enough at the macro level. They WILL BE better than nothing, which, at least for me, is all there seems to be.

Due to the one-child policy and a preference for males, the ethnic Han Chinese majority is aging and shrinking with a massive male imbalance. The age of artificial intimacy has already arrived in China.
Due to an aging and shrinking ethnic Japanese majority, along with a below-replacement birthrate and a xenophobic anti-immigrant policy and practice, the age of artificial intimacy has already arrived in Japan.
America's white majority is aging and shrinking with a below-replacement birthrate, along with a decreasing white life expectancy due to alcoholism, drug addiction, depression and suicide. The age of artificial intimacy is rising in America.

There are thousands of young, single men in Japan who are pretty attached to their "anime" avatars: female personalities who manifest their identities inside small robots, computer gadgets and phone apps. There has always been a large subculture of "otaku", guys who hole up in their apartments for months at a time.

They made a movie about this called Her, in which Joaquin Phoenix falls in love with his computer operating system (voiced by Scarlett Johansson). It was set in the near future.

Is it okay to use a (non-empathic) robot as a companion? It depends. Some, not all, autistic kids won't talk to people but will talk to computers. I can imagine that a robot companion, /along with/ the same effort parents of autistic kids already put into communicating with them, might be helpful in getting those kids to the point where they'd be willing to try human companions. Alternatively, I can imagine that a person who's bedridden through disease or old age might have a wonderful circle of friends and relatives who come to visit, but they don't come 24/7, and a robot companion to bring food or drink or bedpans might be very welcome.

If we ever get empathic robots, they'll be people -- but if we're lucky they'll be slightly better people, less prone to anger.

We learn to respond with empathy as part of a set of social skills that move us forward in life. I would call that organic programming, acquired through experience. The intimacy we achieve is real. Just because a robot acquires its social skills differently does not make the use of those skills less valid. So no, the intimacy will not be artificial.

"The more I know about people, the better I like my (Robo)Dog." Mark Twain

Lasers

David Hambling describes a weapon that disturbs vision by heating the eyeball with infrared radiation (7 March, p 44). This will, fortunately, be cheap and easy to nullify. I foresee that infrared-blocking sunglasses will become the "cool" eyewear for the military and anyone planning to riot.

Who Benefits?

"If machines produce everything we need, the outcome will depend on how things are distributed," Stephen Hawking wrote. "Everyone can enjoy a life of luxurious leisure if the machine-produced wealth is shared, or most people can end up miserably poor if the machine-owners successfully lobby against wealth redistribution."

Star Trek would be a better model.
Want something? Synthesize it. Their synthesizers are pretty much future 3D printers.
Without money, and with jobs that can be done by robots handed over to robots, people are freed up for other pursuits in life.
Just think: if right now you were given a basic wage assured for the rest of your life, would you sit on your ass all day (and some people would, of course), or would you use the time to further your education, learn what you've always wanted to (arts, music, engineering/robotics), or explore the world and in the process improve knowledge and humanity?
That's pretty much what happens now in countries that have social security to fall back on. People are far more willing to pursue careers in less lucrative fields, because not making money doesn't mean starving to death.
Kind of silly how everyone still brings up "but then where does everyone get money to spend if there are no jobs?" when the entire point is to rethink wealth redistribution.

There's a wonderful book on this topic called The Second Machine Age published last year. It basically says what Hawking is saying: in a nutshell, all economic systems (capitalism, socialism, etc.) have been created and predicated upon the idea of scarcity, and if scarcity is no longer a real factor, then we need to completely rethink the way we run our economies, and institute things like massive basic incomes.

The system doesn't continue to work in this scenario, though. If the vast majority of consumers have no disposable income, there really isn't anyone to buy all those products the robots would be making. The redistribution of wealth toward the bottom and an emphasis on the value of consumer choice is the way to keep it from collapsing in a world where robots do most of the work.

I believe the late Iain M. Banks's "Culture" novels provide a brilliant blueprint for this type of civilisation. Off the top of my head:
There is no money at all. If you want something, you just order it.
The absence of money eliminates a lot of crime motives.
Stuff is manufactured by machines.
Everything is run by ultra-intelligent AI systems (called "Minds").
People are free to relax and enjoy a life of extreme decadence. Some do jobs as hobbies.
People are genetically modified for maximum pleasure, e.g. built-in drug glands (able to secrete a huge range of drugs at will) and enhanced sex organs (orgasms are measured in minutes!)

That's because in the Culture universe, humans are glorified pets for the AI.

Oh, and the Culture has mega-powerful kilometres-long starships with silly names. I believe this was the inspiration for the spaceship names in Halo.

A massive warship named The Frank Exchange of Views is easily my favorite thing about those novels.

Imagine you have all the money and power. And a bunch of powerless, moneyless people decide to try to rise up against you. What resources DON'T you have to quickly and efficiently end the rebellion? Militarized police, well-armed guards, security measures built into your mansion, you name it. I don't see any way such a scenario could end in anything but a bloodbath for the 99%. How exactly would they "take that power" in the first place?

The interesting thing is what happens further down the cycle.
For the wealthy, as the social contract has completely imploded, the most pressing concern is defense and law enforcement, to preserve the wealth concentration. However, at what point does that begin to swamp productive assets -- you're spending all your money hiring technicians to program your kill-bots?
At that point, the corporation/estate starts to look a lot like a third-world basket-case state... you start to burn through your existing wealth or non-renewable assets just to keep the books balanced.
Eventually you run out of assets to pay your engineers, and they probably end up dividing the carcass of the company just to settle their bills, and the process repeats on a smaller scale.
I suppose it eventually goes fractal -- everyone eventually burns through all his wealth trying to defend it.

I see three possible outcomes here:
an Elysium world where the rich isolate themselves and leave the poor to fend for themselves;
a basic-income system where the rich enjoy the fruits of their technology and people lead an admittedly "basic" lifestyle without really needing to worry about food, but a majority of people live unfulfilling lives;
a Star Trek-style world where all the needs of everyone are met and all prosper from the fruits of technology.
Or a bleaker vision:
the tipping point is met and violence is the result.

Well sure, if you look at capitalism as a whole, then yes, it's totally exploitative and cruel. BUT if you look at an artificial upper section, say America, where everyone is better off than most everyone in Vietnam, everybody technically has a chance to become one of the few who lord over everyone else. Technically. So it's great and it's fine and it definitely isn't anything like a feudal system whatsoever. At all. So stop asking.
Edit: yup - capitalism has brought great wealth to many, but it isn't a political end-step, and, more to the point, what we have going on in the world right now is hardly capitalism - it's cronyism run by a global elite which perpetuates itself. This is where the comparison to feudalism comes into play. If you think you're playing on a level playing field, they've done their job.

Capitalism has been the driving force behind technological innovation that has pulled the great mass of humanity out of poverty on a global scale and created the wealthiest, healthiest, safest societies ever.
But apparently capitalism is literally feudalism because some McEdgy edge lords said so on the Internet from behind their personal supercomputer.

Realistically speaking, in capitalism if you don't already have money, you have a really tough time making it. Parents can't afford to send you to college? Can't afford to go yourself, because you can't spare the time your low-wage job is using up? Oh well. Want to start a business yourself? Great! But you need collateral and good credit, neither of which you can get from the bottom. It's a rigged system. A few do make it up; if none did, there would be a revolution. But people forget this is the wealthiest country on earth; we shouldn't settle for a 14% poverty rate with 1% owning 40% of the wealth. Something as basic as a comfortable living wage, so there's a better starting point, is necessary for capitalism to thrive.

It is a perplexing scenario. People are not going to just sit around. If they have no resources, they are going to work with others, collaborate, and barter to get them. Will this result in neighborhoods sprouting their own micro-economy?

It's not the nature of capitalism, it's the nature of people. Reddit seems to believe that if it weren't for Capitalism, we'd all live in harmony. What they don't grasp is that it's called human nature for a reason. In any system we implement, we will have the same tendencies. In communist systems, people still lie, steal, take advantage of others, and create their own elite class that lords over the peasants. Any system that isn't designed to work with human nature will be corrupted by it. Capitalism is efficient because human nature is the driving force.

It's not that greed is romanticized by our society, it's that it is literally unavoidable. The entire planet runs on people seeking to further their own self-interest. Every day, people and businesses take risks by sacrificing their capital and putting their financial security on the line. Why? In the hope that they can get more. Is this greed? Well yes, by definition, but without it we simply would not have the commodities and living standards that you and I currently enjoy. All the products that make our lives easier -- cars, computers, cell phones, and anything else you can think of -- exist because someone wanted to make a profit. And that is fundamentally a good thing. Every country in the world runs on this principle, whether capitalist, socialist, communist, or some mixture of them.

These people are so poor that the average person in the US/Canada/UK/etc couldn't even picture what it looks like. What does poverty look like to us? Working two jobs, having a cheap cell phone or an old car, living in a small apartment. This isn't even close to poverty in areas without free markets. Without the prospect of personal gain for people who risk what they have for more, there is nothing to push their quality of life forward.

We have all these products, but we don't seem to be any happier. Are you sure that's fundamentally a good thing?

Yes I am quite sure. You're free to sell/give yours away if you feel so compelled but you will be demonstrably worse off because of it. Go spend some time in a country where people don't have these luxuries. Where they work harder in a month than you have in your life, for pennies on the dollar that you and I make, and tell me how much better off they are because of it.

[Pickledsoul] the lynchings will continue until moral improves

[TEARANUSSOREASSREKT] Morale

[Pickledsoul] The embarrassment will continue until grammar improves
