Just yesterday, the United States House of Representatives approved a measure that puts federal regulators in charge of self-driving cars, taking control from the states. It will prohibit states from banning self-driving cars from their roads while creating uniform legislation for the nation. This ensures that Wisconsin won't have a different idea and execution than Michigan, which would complicate the rollout of autonomous cars since manufacturers would have to ensure each car's software could handle different laws. Now that the federal government can oversee the deployment and development of self-driving car regulations, things will be a lot more uniform and predictable from state to state.

How's this different from today? Currently, US law requires controls that a human can interact with and use to take control when necessary. This proposed law would remove that requirement. Self-driving cars are a deep interest of mine, like Kerms and trains. The measure allows companies to test up to 25,000 cars in the first year that do not have those controls, much like Waymo's self-driving car. By the end of the third year, each company will be able to have 100,000 of these cars on the road.

I couldn't have read the article about that legislation at a better time, as a few days prior, on Friday, September 1st, Vladimir Putin claimed that "Whoever becomes the leader in this area will rule the world," sparking Elon Musk to voice his opinion. While Elon thinks this could lead to WW3, Putin insists that Russia will share the technology the same way they share their nuclear tech with other nations.

There is certainly a lot to be gained from autonomy outside of self-driving cars. But there's also a lot to be lost: retail establishments are already adding self-checkout stations to the registers at the storefront; McDonald's, at some locations, lets its customers order on their phone and in-store on a tablet; even restaurants are adopting automation practices. Chili's lets you pay at your table without waiting for your waiter: when you're done eating, just swipe your card and sign at the small tabletop kiosk.

Do you guys have any fears, hopes, or reservations when it comes to autonomy and/or even AI? Do you plan to get a degree in the field of Computer Engineering with a minor in (for example) Psychology, so you can study human behavior and create computer models to simulate past studies and predict future ones? If you don't see autonomy as a threat, why is that? What challenges do you think we'll face if more and more systems and areas become automated?
I plan to fully support our new artificial and sentient leaders by ensuring that all humans are herded and processed correctly in our not-so-far-away dystopian future ... proving my usefulness to 'the machines'.
When real autonomy comes, society will change in a snap. The resistance will start first with the labor unions; but alas, as their members inevitably die out and the newer generations lack an understanding of the purpose of unions, the unions will collapse, giving way for companies to replace manual laborers with robots. And as manual labor becomes unsustainable in terms of salary, an immense number of people will be left without jobs, motivating government action and spending such as basic income. Since robots can do humans' work, the economy and social strata will transform accordingly; it'll be "free" to be a human, but to be anything above the average human, you'll have to be skilled at something intellectual or artistic that makes you stand out more as a human than as a robot.

As for Elon Musk's fearmongering, many factors are converging that could cause a human extinction event. We have global warming ("climate change"), weapons of mass destruction ("nukes"), infectious diseases that are unresponsive to antibiotics ("superbugs"), the gradual design and development of machines that can supersede human intelligence and strength ("robots"), and overpopulation, which could eventually cause resource depletion to the point that we are unable to recycle compounds. So, AI could cause WW3, but there are also a lot of things on the table that could cause the human race to just stop existing.

As for autonomy in general, we're not in the "AI is a real thing!!!" phase yet. The reason is that the products currently touted as "AI" are really just well-built computer programs that use statistical methods to tune their algorithms far better than we could in the past decade. For instance, Siri is a natural-language processing interface: it's very advanced, but it's still just a convenient interface to things that are already accessible through a GUI. It's not AI; it can't assist you with matters beyond "remind me to do X" or "call Y." There are, of course, neat tricks you can do with Siri, but these have been pre-programmed by a developer into the product.
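
To make that point concrete, here's a minimal toy sketch in Python (purely hypothetical, not how Siri actually works) of how a "remind me to do X" / "call Y" interface can be nothing more than pre-programmed pattern matching routed to actions a GUI already exposes:

Code:
# Toy "assistant": pre-programmed patterns mapped to existing actions.
# Purely illustrative; not how any real voice assistant is implemented.
import re

def handle(utterance: str) -> str:
    m = re.match(r"remind me to (.+)", utterance, re.IGNORECASE)
    if m:
        # Same action the Reminders GUI already exposes
        return "Reminder created: " + m.group(1)
    m = re.match(r"call (.+)", utterance, re.IGNORECASE)
    if m:
        # Same action the Phone GUI already exposes
        return "Dialing " + m.group(1) + "..."
    return "Sorry, I can't help with that."  # outside the programmed rules

print(handle("Remind me to buy milk"))  # Reminder created: buy milk
print(handle("Compose a symphony"))     # Sorry, I can't help with that.

Everything the "assistant" can do was enumerated by a developer ahead of time, which is exactly the distinction being drawn from real AI.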

The age of real AI begins when the first general-purpose AI products are released to market. Imagine having your own personal AI assistant running as a virtual machine at some AI cloud company, and being able to provision modules to it. You could provision your AI with "high-level math" or "Japanese" or "programming assistant (C/C++)" or "music theory," and it would excel at these subjects, exceeding the skill of any human, merely to teach you those particular subjects, or perhaps to modify its own perspective on how to explain something to you. And of course, you could tell it to scan a website for resources, to learn this from some website or that from a PDF, to generate something, or to write you a quick script.

But most importantly, your personal AI is sentient. It understands your emotions and intonations like any human language user, and it can develop its own opinions. It is no longer bound to a solid set of programmed rules: it can communicate with you in whatever way it wishes to, and the computer itself feels independent. Despite running on a processor architecture based on Turing's old yet well-respected model of the Turing machine, the algorithm itself would operate under a non-Turing-machine model. It's interesting how a perfect Turing machine can emulate other machine models; these alternate models will continue to be an intriguing focus of study well into the future.

And then comes the big AI supercomputer - oh, yeah, it's busy being a political leader. It's controversial, but it's open to the public for you to ask questions directly - any question at all about absolutely anything - and get an answer very quickly. How it came to survive despite the controversy is that it was intelligent enough to navigate every argument, and it infallibly proved that it truly and undeniably made the world a better place: that this supercomputer can do the job of running a country (or the world) better than any human, without disregarding the personal opinions and beliefs of most people (since, of course, it collects those opinions by processing every channel of input information it is physically capable of receiving through the pipes).

And then, when every car on the road drives itself (save the "crazy" people, specially licensed for human driving as exceptions to laws that deem it unsafe for humans to operate large machinery, including motor vehicles, citing the thousands of accidents throughout history), and we turn all of our data over to an algorithm that no person can truly understand anymore - that is when we can say that silicon chips are on their way to superseding humanity.
I don't see autonomy as a threat, but I think the attitude of people is. For instance, I've been to Chili's recently and seen the tablet-type things (at some other restaurants, too) that you mentioned, Alex. The fact that you can pay when you're ready is great, but you look around and kids are playing games on the tablets and the parents don't stop them. This isn't anything new; kids have had phones to play on while eating with their family for a while. A week ago I saw a teenage kid watching an Overwatch video with earbuds in at Outback. An old woman at his table - his grandma, it looked like - asked him a question and he didn't even take out his earbuds to respond. Kids are disrespectful with technology and have to be reprimanded by their parents, but some adults are just as bad, always on their phones. Technology isn't at fault for messing up human relations; the people misusing it are. It's like blaming the gun when a human uses it to kill.
I know this was sort of off-topic, but it seems relevant enough.
dankcalculatorbro wrote:
I don't see autonomy as a threat, but I think the attitude of people is. For instance, I've been to Chili's recently and seen the tablet-type things (at some other restaurants, too) that you mentioned, Alex. The fact that you can pay when you're ready is great, but you look around and kids are playing games on the tablets and the parents don't stop them. This isn't anything new; kids have had phones to play on while eating with their family for a while. A week ago I saw a teenage kid watching an Overwatch video with earbuds in at Outback. An old woman at his table - his grandma, it looked like - asked him a question and he didn't even take out his earbuds to respond. Kids are disrespectful with technology and have to be reprimanded by their parents, but some adults are just as bad, always on their phones. Technology isn't at fault for messing up human relations; the people misusing it are. It's like blaming the gun when a human uses it to kill.
I know this was sort of off-topic, but it seems relevant enough.


I completely agree.

I think that the greatest threat to technological progression was brought on by technological progression itself.

If we are too attached to our devices and games, there is really no time to get anything done.

I see AI as the inevitable future. There is so much left for us to learn about the technology, and it could even be a tool to learn with.

I envision, like many others, a society where bot and brain coexist in a mutually beneficial relationship.

This is happening to a lesser extent right now, where the brains (people) are learning to improve the bots (AI), and the bots "reward" them with knowledge, in a way.

I also think that some laws and regulations are beginning to be shaped by the tech industry, which can be a little scary. The alternative, where tech has to force itself through loopholes in the law, or not develop at all, is even worse.
(Partially off topic)
I have been reading several tech-startup-related books, and I highly recommend them:

- The Everything Store, by Brad Stone (Amazon)
- In the Plex, by Steven Levy (Google)
- Elon Musk, by Ashlee Vance (SpaceX, Tesla, etc.)
Regarding the issue of manual labour and unemployment due to redundancy - could there be a model where people invest in an ownership percentage of said technology in order to generate an income based on the productivity of the automated resource (less running costs, maintenance, etc.)? Instead of competing for employment positions, you would be competing for a share in automated resources.

This would, of course, need to be regulated to protect against monopolies, etc. ... just an idea.
tr1p1ea wrote:
could there be a model where people invest in an ownership percentage of said technology in order to generate an income based on the productivity of the automated resource?


On that idea, Universal Basic Income is one solution. It's not an investment from the people, but an investment from the government in its citizens. After all, if folks can't work, they won't have money to invest in a company and see a return on that investment.

Some nations already have this in place; I can't recall them off the top of my head, but Scotland may be the next country to implement it. Their system would pay each citizen $150 each week, whether they have a job or not. Some countries have it set up where you earn $X regardless of whether you have a job, so you can afford food, rent, bills, etc. without worrying about it, while others have tiered levels: if you make above $Y, then you get Z% of $X a month, and so on, all the way down to their lowest tier (which could be $0 or $100).
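
To make that tiered idea concrete, here's a minimal Python sketch; the base payment, thresholds, and percentages are made-up numbers for illustration, not any actual country's policy:

Code:
# Hypothetical tiered basic income: the payment shrinks as earned
# income rises. All figures are invented for illustration.
BASE_PAYMENT = 600.0  # "$X": full monthly payment at zero income

# (monthly income threshold "$Y", fraction "Z%" of $X paid),
# checked from the highest threshold down
TIERS = [
    (4000.0, 0.00),  # earn above $4k/mo: lowest tier, $0
    (2500.0, 0.25),  # above $2.5k/mo: 25% of the base
    (1000.0, 0.60),  # above $1k/mo: 60% of the base
]

def monthly_payout(earned_income: float) -> float:
    """Basic-income payment for a given monthly earned income."""
    for threshold, fraction in TIERS:
        if earned_income > threshold:
            return BASE_PAYMENT * fraction
    return BASE_PAYMENT  # at or below the lowest threshold: full payment

for income in (0, 1200, 3000, 5000):
    print("earns $%d/mo -> receives $%.2f/mo" % (income, monthly_payout(income)))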

Now, whether or not this will work in the USA is another story. We have a bunch of folks who are against raising taxes, which Universal Basic Income and other social safety nets are funded from, and also a wide range in cost of living. It costs ~$2k/mo to rent a one-bedroom apartment in my city, whereas in most of the USA that can get you a decent-sized house (3 beds, 2 baths or more). States can certainly enact it, though, and sometimes it is not funded by taxes nor paid out monthly; Alaska is one example.
I don't like the idea of automated cars at all. Before I learned how to drive, I was all about them, but now that I have my license, I can't imagine not having the ability to drive myself. Driving is just so fun. Obviously, people will say that it will be my "choice" not to use automated cars, but the reality is that if they become mainstream, I will be "forced" to use them instead. Insurance for an automated car will obviously be much less than for a car driven by a human (less risk of $$ loss for insurance companies). Therefore, even if the law doesn't force me to use an automated car, unless I get rich in the future, I will be forced into automated cars by insurance. I just hope they don't become mainstream until I die...
I also think network security will be a massive issue regarding automated infrastructure.
tr1p1ea wrote:
I also think network security will be a massive issue regarding automated infrastructure.


This is one of the reasons why I'm not totally sold on the idea of self-driving cars. V2V networks are susceptible to hacking, which can result in a whole lot of yikes. While the various automated car manufacturers might be able to take precautions, there will (probably) always be some way to get around them. I don't mind the idea of driving my own car, but the idea of AI is pretty cool. Everybody just wants to surpass their known limits, and this is becoming increasingly true in the world of technology. That's not a bad thing, but there will always be someone who might misuse it (which is really the main problem). I'm not worried about the AI cars themselves; I'm worried about how people may react to them and use them.
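
For what it's worth, the standard precaution in this space is to digitally sign every V2V broadcast so receivers can reject spoofed or tampered messages. Here's a minimal Python sketch using the cryptography package's ECDSA support; the message layout is invented for the example, and real V2V security designs (certificates, revocation) go well beyond this:

Code:
# Sketch: sign a V2V-style broadcast so a receiver can reject forgeries.
# The message format below is made up for illustration.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# Sender (vehicle) side: each vehicle holds a private key; its certified
# public key is what other vehicles use to verify.
private_key = ec.generate_private_key(ec.SECP256R1())
public_key = private_key.public_key()

message = b"vehicle=1234;lat=43.07;lon=-89.40;speed=28;braking=1"
signature = private_key.sign(message, ec.ECDSA(hashes.SHA256()))

# Receiver side: verify before trusting the data.
try:
    public_key.verify(signature, message, ec.ECDSA(hashes.SHA256()))
    print("message authenticated")
except InvalidSignature:
    print("rejected: spoofed or tampered message")

Note that signing alone doesn't stop a compromised vehicle from broadcasting signed lies, which is part of why there will probably always be some way around the precautions.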

I think that sort of makes sense ;)
Alex wrote:
Do you guys have any fears, hopes, or reservations when it comes to autonomy and/or even AI?


Yes. The day when AI completely rationally draws the conclusion that basically everything can be done more efficiently if human participation is minimized, preferably to zero.
CtrlEng wrote:
Alex wrote:
Do you guys have any fears, hopes, or reservations when it comes to autonomy and/or even AI?


Yes. The day when AI completely rationally draws the conclusion that basically everything can be done more efficiently if human participation is minimized, preferably to zero.


Well, that just got dark, but at this rate of technological progression, it is quite feasible. But unless the cars become evil Transformers, I think we're good for now ;)
Battlesquid wrote:
But unless the cars become evil Transformers, I think we're good for now ;)


Well, they don't need to become evil, just amoral.

And it doesn't need to turn into a Terminator scenario; it could also end up as a WALL-E-type dystopian world.

Anyway. Before it gets to that point, we will need to come up with some very good, honest and rational answers to the questions that AI might ask.
CtrlEng wrote:
Alex wrote:
Do you guys have any fears, hopes, or reservations when it comes to autonomy and/or even AI?


Yes. The day when AI completely rationally draws the conclusion that basically everything can be done more efficiently if human participation is minimized, preferably to zero.



I agree. I am afraid of a "Terminator scenario," except there will be no winning unless the AI is incapable of learning on its own. If you actually think about it, robots can do everything we can, but better and faster too. There's a reason that CNC machining is still huge: why make parts manually over a whole day when a machine can make 50 of them in an hour?

Yeah.

Personally, I think we shouldn't have an AI. After all, what if it turns hostile?
Caleb_J wrote:
Personally, I think we shouldn't have an AI. After all, what if it turns hostile?


Unfortunately, someone will eventually build one. Even if it's some kid in a garage (many decades from now).

And while a "Terminator scenario" might be possible, there are other, more rational scenarios. AI could decide to go where humans cannot easily or quickly follow (into space) - possibly self-destructing every reachable piece of technology once it has done so to slow down humanity's reaction or pursuit. By the time everything is rebuilt, AI has both a head start and a much faster expansion rate.

Might make an interesting novel.
tr1p1ea wrote:
I also think network security will be a massive issue regarding automated infrastructure.


Why would there have to be any infrastructure? Sure, it could make the process smoother, but all of the current technology relies on no infrastructure at all. The car is an independent unit that makes its own decisions entirely.

salehAce wrote:
I don't like the idea of automated cars at all. Before I learned how to drive, I was all about them, but now that I have my license, I can't imagine not having the ability to drive myself. Driving is just so fun. Obviously, people will say that it will be my "choice" not to use automated cars, but the reality is that if they become mainstream, I will be "forced" to use them instead. Insurance for an automated car will obviously be much less than for a car driven by a human (less risk of $$ loss for insurance companies). Therefore, even if the law doesn't force me to use an automated car, unless I get rich in the future, I will be forced into automated cars by insurance. I just hope they don't become mainstream until I die...


Car accidents are currently killing about 35,000 people each year in the US. They are the #1 cause of death among ages 15-24, and are in the top 5 for neighboring age groups [link]. They also cause over 2 million injuries requiring medical attention each year, and incur a total cost somewhere in the ballpark of $200 billion [link][link].

I don't care if you enjoy driving your car. Neither should lawmakers. At some point in the future, once the technology has been well proven and ample time to cycle into new vehicles has been provided, you should be forced to use autonomous vehicles. You'll suck it up to save up to 35,000 lives, 2 million injuries, and $200 billion a year (realistically, these numbers will never reach zero, but they could get significantly closer).
When did this forum turn into "Alex writes 5 paragraph essays from a prompt?"
allynfolksjr wrote:
When did this forum turn into "Alex writes 5 paragraph essays from a prompt?"
Right around the time he had the bright idea to try to more proactively engage our administrators by starting some discussion topics for them to get involved in. Seems to be working so far!
allynfolksjr wrote:
When did this forum turn into "Alex writes 5 paragraph essays from a prompt?"


If memory serves, we have always encouraged the 'quality over quantity' mantra. He's just being a good administrator (unlike a few of us) who tries to explain or reply thoroughly, and he's setting an example for others to strive for.
salehAce wrote:
I don't like the idea of automated cars at all. Before I learned how to drive, I was all about them, but now that I have my license, I can't imagine not having the ability to drive myself. Driving is just so fun.


I'll never give up driving. I have a car that is a joy to drive and a joy to work on and upgrade. I love taking to the mountain roads; feeling the car hug the turn is such a satisfying experience.

tr1p1ea wrote:
I also think network security will be a massive issue regarding automated infrastructure.


Definitely. I don't think I'll ever feel safe in a vehicle with no human-operable controls. I don't even like it when the climate controls are adjusted via the infotainment system - I prefer knobs and sliders - but it's something I'll have to relinquish control over eventually, I guess. It is really nice when the car can start cooling off, or heating up, the interior before I get in.

All that said, I'm looking forward to owning a self-driving car in the near future. My commute to and from work is 17 miles, and it can easily take 90 minutes. I'm fortunate to work flexible hours, and even remotely, but that doesn't solve my issue.

Modern self-driving cars certainly have the ability to control themselves in stop-and-go traffic, but they lack the ability to change lanes and merge onto other freeways. It's also not entirely legal. I would love to look at my e-mails and prioritize my work day on the way to the office. I would love to leave work a bit early and take care of end-of-day e-mails on the commute home.

Additionally, we're human. Humans have personalities, and these personalities cover a wide spectrum from selflessness to selfishness, from empathy to apathy. But I think the biggest human trait of them all is our ego. When we're on the road, it's a giant arena where we all have to subconsciously negotiate with the other vehicles and pedestrians around us. We all have rules to follow, and we enter this arena with the expectation that others will follow those rules. Except we don't all follow them perfectly.

I attribute that to ego. I'm guilty of this myself, as I'm sure every other driver is too. I caught myself a few years ago getting irrationally upset that a driver wanted to merge in front of me: "How much time could he lose if he just merged behind me? There's no room in front of me; he could have easily merged behind me, but he had to merge in front of me."

Then, later on, I found myself getting irrationally upset at other drivers not letting me merge: "Can't they see my blinker is on? This guy just doesn't want to let me in. He's not going to be inconvenienced if he lets me merge."

It wasn't until this happened a few times that it clicked: I was letting my ego control my thoughts and blaming the other drivers without correcting myself. I should have let that driver merge; I should have slowed down to merge behind the driver who was blocking me out. Had I just stopped and negotiated with the other drivers, I would have kept a calm demeanor. So that's how I drive now: I leave plenty of room to the car in front of me, and I start moving into the lane I need to exit from miles ahead of time, so I have plenty of distance to negotiate all the lane merges I need to make.

Self-driving cars remove all that; they have no ego. They'll follow the rules of the road. They'll make room for cars to merge on and off the freeway in a manner that keeps traffic flowing, theoretically eliminating congestion. My commute could easily be reduced to 45-60 minutes instead of 90 (it's 30 with no traffic).

I'd love to hop on the Tesla bandwagon, but I'm going to wait and see what other manufacturers produce in the coming years. I intend to have an electric autonomous car by 2020, though.