GTC was notably smaller and less populated than CES, but I expected as much despite GTC attendance doubling since 2012. By 2pm the exhibition floor was not nearly as crowded as it was at 11am after the keynote. Thankfully, unlike CES16, I was able to try an Oculus Rift within 45 minutes: I booked my appointment, then walked around the floor and had lunch. When it was close to my appointment time I got in line and they set me up. They had 8 booths, 4 Oculus and 4 Vive, and since some of the Oculus experiences were seated they had a few more Oculus slots per day than Vive slots. I played EVE: Valkyrie and it definitely took some getting used to.

Unlike PSVR at CES, I did get a slight sense of motion sickness. Now, I'm not one to get motion sickness, but I attribute this case to the fact that I was turning my head and piloting the spaceship all while planted in a chair. I should have been perfectly fine, since I was sitting down in a cockpit. However, I'm pretty sure the only times I felt a bit dizzy were when I was both looking around and maneuvering. It also took me a while to remember that I could look around; at first I only made small head movements, and about halfway through my time I realized I could look farther left, right, up, and down. I managed to get 1 kill across both my games.

There was a cool service that I saw. To set some background: I occasionally browse imgur and stumbled across this album of neural network images on the front page. I quickly registered on the site and queued some images up to be processed. About 3 hours later the site redirected to their Facebook page, with a post saying the site was seeing higher than expected volume from outside countries. All images in the queue were purged and access was now only permitted from within Russia. I cried. I ended up finding the source on GitHub, because it was open source. Turns out, there's another website that does this stuff: http://deepart.io. The wait time to have an image processed is 6 hours, so I'll share my results tomorrow, but the booth was doing them in 15 minutes. I could pay a couple euros for a 15 minute turnaround, but I don't mind waiting.

This brings me to the keynote. I wouldn't have a lot to write about if I hadn't gotten into the keynote. I had a seat right up front, behind the rows of press. It started a few minutes after 9am, but while we waited there was a great screen saver of sorts on the stage screen. That was fun to watch for about half an hour Very Happy Once underway, Jen-Hsun Huang, the CEO and co-founder of NVIDIA, laid out the 5 areas he'd be speaking on today: the NVIDIA SDK, the Tesla P100, Iray VR, autonomous cars plus HD mapping, and the DGX-1. I didn't take notes during the keynote and only started taking photos on my phone pretty late into it, so I'll only touch on the stuff I remember.

Autonomous Cars and HD Mapping
They actually announced some of this stuff at CES16, but I missed it. Well, I saw their exhibit. I saw the car. I just didn't see anything to read or really learn what was going on, and all the representatives were busy, so I carried on. But they did announce the PX-2 at CES16, and today they showed what it's capable of. It supports up to 12 cameras, takes in all that data to build a point cloud of the scenery, and uploads that point cloud to servers where it's processed. So rather than continuously uploading 12 HD video streams, you upload a highly compressed point cloud. This isn't to directly control the car but to collect map data of the roads, which will give the cars extremely precise and accurate data about lane markers, signs, exits, and much more.
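To put some rough numbers on that difference, here's a back-of-the-envelope sketch in Python (my own math with assumed figures; the point count in particular is a guess, not NVIDIA's number):

Code:
# Rough comparison of uploading 12 raw HD camera streams vs. a voxel-filtered
# point cloud for the same second of driving. All figures are assumptions for
# illustration, not NVIDIA's published numbers.
cameras, fps, width, height, bytes_per_px = 12, 30, 1920, 1080, 3
raw_video_per_sec = cameras * fps * width * height * bytes_per_px   # bytes/s

points_per_sec = 200_000                      # assumed points kept after filtering
point_cloud_per_sec = points_per_sec * 3 * 4  # x, y, z as 32-bit floats

print(f"raw video  : {raw_video_per_sec / 1e9:6.2f} GB/s")
print(f"point cloud: {point_cloud_per_sec / 1e6:6.2f} MB/s")
print(f"roughly {raw_video_per_sec / point_cloud_per_sec:,.0f}x smaller")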

As far as autonomous driving goes, NVIDIA is fully committed to deep learning and neural networks, announcing the DGX-1 and P100 as well as the supporting SDKs. In fact, if you want to read up on how neural networks can help a car learn to drive, Bloomberg ran a great article on George Hotz, aka geohot, and his 2016 Acura. It's worth a read, but in summary: George bought a 2016 Acura, which has no autonomous driving capabilities out of the box, and retrofitted it with cameras and self-driving software of his own. His goal is to make the software and cameras available for $1,000, though there's no word on the additional supporting hardware. Seriously, just go read it.

Cemetech Plug: We have a discussion topic right now about autonomous driving! *cough cough* Wink

DGX-1
And that's all I feel comfortable touching on directly, though indirectly I want to touch on the DGX-1. There was so much talk about how this will make deep learning much easier. What is deep learning? Deep learning is what computers use to derive results from data. You could, in essence, give the computer thousands of images in a certain painting style and then ask it to paint a scene. You could give the computer hundreds to thousands of images of elephants, and it'll then be able to pick out elephants in pictures. A great example, actually, was announced yesterday: Facebook has made it easier for blind users to navigate and interact. Facebook now "speaks" image contents to the disabled (or, I'm sure, to anyone who has that option enabled). I think this article by The Verge describes it best, but basically these computers have learned what objects are and can tell them apart. In The Verge's example, Facebook found pizza in one image (sadly it doesn't say what kind) and, in another, two people smiling in front of the ocean with the sky behind them. That deepart service I mentioned above? It uses neural networks and deep learning to redraw the base/submitted photo in the style of a second photo.
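To make the elephant example concrete, here's a minimal Python sketch using a convolutional network that has already been trained on about a million labeled photos (this is my own toy using PyTorch/torchvision, not Facebook's or NVIDIA's actual system; the file name is a placeholder):

Code:
# Ask a pretrained CNN what it sees in a photo. It prints the indices of its
# top 5 guesses; mapping those indices to names like "African elephant" needs
# the ImageNet label list, omitted here for brevity.
import torch
from torchvision import models, transforms
from PIL import Image

model = models.resnet50(pretrained=True).eval()

prep = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

img = prep(Image.open("elephant.jpg").convert("RGB")).unsqueeze(0)
with torch.no_grad():
    probs = torch.softmax(model(img), dim=1)[0]

top5 = torch.topk(probs, 5)
for p, idx in zip(top5.values, top5.indices):
    print(f"class {idx.item()}: {p.item():.1%}")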

Deep learning is incredibly useful for images. Google uses it for their image results. When the service first started, it very likely relied on embedded keywords or the words on the page the photo came from. Now, Google's deep learning systems can automatically determine what's in each photo. Deep learning is also useful for robots. Rather than programming how to do a task, you tell the robot the basics - square peg through square hole - and let the robot learn over hundreds of tries. If you've been keeping up on AlphaGo, that machine used deep learning to learn how to play Go and beat a human expert. The programmers didn't program the machine with how to play Go; instead, they taught it the basics and the machine learned by playing itself over and over and over again, each time learning new strategies. In fact, during one of the matches AlphaGo made a move that completely baffled the human player because it didn't look logical - only it turned out to be an incredibly calculated move. Seriously, read up on that too.
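To give a flavor of that "teach it the basics and let it learn over hundreds of tries" idea, here's a toy reinforcement learning sketch in plain Python (nothing like AlphaGo's deep-network-plus-self-play setup, just the learning-by-trial principle on a tiny corridor world):

Code:
# Tabular Q-learning on a 10-cell corridor. The only "basics" we give the
# agent is a reward of 1 for reaching the rightmost cell; it figures out the
# "always go right" policy on its own over a few hundred episodes.
import random

N = 10                                  # cells 0..9, goal at 9
Q = [[0.0, 0.0] for _ in range(N)]      # Q[state][action]; 0 = left, 1 = right
alpha, gamma, eps = 0.5, 0.95, 0.1

for episode in range(500):              # "hundreds of tries"
    s = 0
    while s != N - 1:
        if random.random() < eps or Q[s][0] == Q[s][1]:
            a = random.randint(0, 1)               # explore / break ties
        else:
            a = 0 if Q[s][0] > Q[s][1] else 1      # exploit what's been learned
        s2 = max(0, s - 1) if a == 0 else min(N - 1, s + 1)
        r = 1.0 if s2 == N - 1 else 0.0            # the only rule we supply
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

# The learned greedy policy should now be "right" in every cell.
print(["right" if q[1] >= q[0] else "left" for q in Q[:-1]])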

Deep Learning and Autonomous Driving
This was another point NVIDIA touched on. They are building 2 autonomous race cars and entering them into the Formula E ROBORACE. There will be 10 teams with 2 cars each. Each car will use the same brain/computer, the aforementioned PX-2, to make a level playing field, but what can differ is how that information is processed. NVIDIA will be using their technology while other companies/teams will use theirs. Racing is a great place for this tech to take off; racing leagues have brought many technologies to modern cars.

2016 looks to be a great year!
I'm on a roll with these double posts, guys, but I didn't want to write my opinion on something in the same post as the event itself. I figured a reply to the topic was the best way to separate the two.

I'm planning to buy a house in the next few years and was really interested in the Internet of Things, or IoT, gear presented at CES. But one of my major turn-offs about these smart home products is that they all rely on an outside service. Those security cameras that alert you if someone breaks in, or if they detect motion? They rely on the camera provider's servers to analyze that stream. Okay, sure, motion detection is an incredibly easy thing to do and is done right on the camera, but it's those services that notify you. Those services store your triggered video data on their servers. Heck, Nest is shutting down a smart home service and it's affecting people. I vehemently do not want to be tied to a service. I want to buy a product, not a service. That's the reason I haven't bought any smart home accessories yet, though the only thing I could really start with is Philips Hue.

During the GTC keynote I had an idea. With all this emphasis on deep learning and data processing, it should be an eventual goal to get these deep learning machines inside a household. Let them process the video data; let them run the smart house. Then my idea took off and I stopped paying attention to the keynote for a bit. (Oops?)

I would be perfectly okay in a world where my home is aware of everything I do: parsing my texts and e-mails, my phone calls and voicemails, and anything else I can throw at it. If I call a friend and say "Hey, can you take my dog to the park at 2pm?" and they reply "Yes!", whether it's by text or voice, I would want my home to know. One: I'm likely out of the house, and it would use my phone's location to determine that. Two: it would learn who my friends are over time, including their vehicles and faces. Three: if it recognized my friend pulling up to my house, it could unlock the front door for them. If it was 2pm and someone other than the friend I talked to rolled up to my house, the system wouldn't unlock the front door. It wouldn't grant them that access.

That's just one example. The house could also be aware of pending package deliveries and alert me when a package arrives. It could then monitor that package on my front porch, and if someone takes it away the house could alert me and notify the authorities. If I have connected outdoor speakers, it could even broadcast an audible message.

I wouldn't mind connecting through a VPN set up on my router so my smart home gear could communicate with my home directly instead of through multiple services. I'm a committed Apple user, but I would drop Apple products overnight if I could do this with Android or Windows Phone: a complete in-home, "offline" solution to a smart home. My home would know my alarms and raise the temperature and lights so things are cozy when I wake up or by the time I come home. Sure, you can do that now, but each service needs to be aware of your desired wake-up time or when you are on your way home. I'm not talking about IFTTT. I'm talking about a computer in the home that processes and controls all the smart home systems without connecting to an outside service. So if Company A goes out of business or shuts down, much like in the linked article above, my stuff still works.
My understanding of Apple's HomeKit is that everything, bar voice processing for Siri commands, happens in-home, unless you explicitly configure remote access through your Apple TV.


On the deep-learning front, it should be mentioned that most people use that terminology to refer to convolutional neural nets, which are great at learning to recognize / classify patterns, but not necessarily good at generating policies (a different neural net architecture, called recurrent neural nets, can be used for predicting "what should come next"). AlphaGo in particular used deep learning combined with another technique called reinforcement learning to generate its strategies.
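For the curious, here's a minimal character-level sketch of that "what should come next" idea (a toy I put together in Python/PyTorch purely for illustration, not how any production system is built):

Code:
# Train a tiny recurrent net to predict the next character of a short string,
# then let it generate a continuation from a seed character.
import torch
import torch.nn as nn

text = "hello world, hello world, "
chars = sorted(set(text))
idx = {c: i for i, c in enumerate(chars)}
data = torch.tensor([idx[c] for c in text])

class CharRNN(nn.Module):
    def __init__(self, vocab, hidden=32):
        super().__init__()
        self.embed = nn.Embedding(vocab, hidden)
        self.rnn = nn.RNN(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab)
    def forward(self, x, h=None):
        y, h = self.rnn(self.embed(x), h)
        return self.out(y), h

model = CharRNN(len(chars))
opt = torch.optim.Adam(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

x, t = data[:-1].unsqueeze(0), data[1:].unsqueeze(0)   # inputs and next-char targets
for step in range(300):
    opt.zero_grad()
    logits, _ = model(x)
    loss_fn(logits.view(-1, len(chars)), t.view(-1)).backward()
    opt.step()

c, h, out = data[:1].unsqueeze(0), None, text[0]       # generate from the first char
for _ in range(20):
    logits, h = model(c, h)
    c = logits[:, -1:].argmax(-1)
    out += chars[c.item()]
print(out)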
elfprince13 wrote:
My understanding of Apple's HomeKit is that everything, bar voice processing for Siri commands, happens in-home


To the best of my knowledge that's the case too. But we still have services like IFTTT. So if I'm not home and someone rings my smart doorbell, the service checks where I am, sees I'm not home, and sends me a picture of the person at the door. If I were home, no notification would be sent. I think? There's no interconnectivity between the smart gadgets on the same network.

I haven't looked that much into HomeKit, but if I were to say something like "Hey Siri, close up my house," I'd expect HomeKit to recognize there are multiple components to this: close my smart shutters/blinds, turn off the smart interior lights, and make sure the smart deadbolt on the front door is locked. And if I have any smart appliances, like a stove or oven, ensure those are off too.

If I were to say "Hey Siri, I'll be home in about an hour," I would expect HomeKit to recognize that it'll be dark by then, so it should turn on the exterior lights and the interior lights by the door, heat my house up to my preferred level, and perhaps start the laundry in the washer and run the dishwasher. If it's a cold night, maybe I threw a blanket in the dryer and set that to run on my command.

As I understand, I'd have to tell Siri to turn on my hallway light. Heat my house up. Start my washer. Etc etc. All individually because each component uses a different hub and communication network within my home.

But I want things to go a step beyond that. I don't want to have to tell my phone; I want my phone and car to talk to each other. I just installed a CarPlay radio in my car. When I open Apple Maps there's often already a destination selected - usually work - because of either my scheduling in the calendar app or my frequently visited locations. I was heading to work one morning, during the honeymoon phase with this radio, and Apple Maps had my route on the screen and was giving me directions. It wanted me to take an exit on the freeway, but I purposely took the next exit because it was a more direct route and the exit I take every morning. When I missed the exit it wanted me to take, Apple Maps automatically rerouted me to my next frequent destination - which was about 50 miles away.

It assumed that since I didn't take that exit, I must be going to another location. I would want my phone to talk to my car - or maybe have the car handle it entirely - and think, "Hey, it's 8pm on a Thursday and he's away from home. He's usually home by 7pm. We see that he's driving, so let's route him home and get his house prepped for his arrival." The car would connect, presumably through a VPN, to my home network and communicate my arrival time. And if not the car, then the phone. Jen-Hsun touched on the fact that we're approaching a future where we'll be able to talk to our cars; they'll always be watching and listening. Going off on a tangent here: I could be in the passenger seat and point while saying "We need to go that way." The driver could say something acknowledging like "Okay," and the car would process that exchange and then either reroute the navigation or reroute itself, because it saw my gesture, heard the conversation, interpreted that information, and acted on it.

Quote:
On the deep-learning front, it should be mentioned that most people use that terminology to refer to convolutional neural nets, which are great at learning to recognize / classify patterns, but not necessarily good at generating policies (a different neural net architecture, called recurrent neural nets, can be used for predicting "what should come next").


They did touch on RNNs. NVIDIA even has cuDNN 5, which I would imagine supports RNNs as well. This isn't an area I'm knowledgeable in at all. Laughing
Quote:
As I understand, I'd have to tell Siri to turn on my hallway light. Heat my house up. Start my washer. Etc etc. All individually because each component uses a different hub and communication network within my home.

This isn't actually true - HomeKit allows you to bundle groups of actions into single commands. That said, I agree that learning the correct behavior is a fascinating next step.
elfprince13 wrote:
Quote:
As I understand, I'd have to tell Siri to turn on my hallway light. Heat my house up. Start my washer. Etc etc. All individually because each component uses a different hub and communication network within my home.

This isn't actually true - HomeKit allows you to bundle groups of actions into single commands. That said, I agree that learning the correct behavior is a fascinating next step.


Interesting, I thought groups could only be made within the same ecosystem, such as telling Siri to turn off the ambient lights and then having Philips Hue turn off all the lights you've put in the "ambient light" group. Since there isn't a dedicated HomeKit app on my home screen, I figured each app could only interact with its own accessories within HomeKit. I'll have to do some proper research into HomeKit, it seems.
comicIDIOT wrote:
There was a cool service that I saw. To set some background, I do occasionally browse imgur and stumbled across this album on the front page of neural network images. I quickly registered on the site and set some images up to be processed. About 3 hours later the site redirected to their Facebook page with a post that read that the site was seeing higher than expected volume from outside countries. All images in the queue were purged and access was (now) only permitted from within Russia. I cried. Ended up finding the source on Github, because it was open source. Turns out, there's another website that does this stuff! http://deepart.io. The wait time to have an image processed is 6 hours so I'll share my results tomorrow but the booth were doing them in 15 minutes. I could pay a couple euros for a 15 minute turn around but I don't mind waiting.


I'm still trying to figure out how this works. It seems to work best when the original photo and the style source are visually similar, such as with my Golden Gate Bridge photo and the photo of my friend smelling a flower. The photos of my 3D printed self are okay, while the couples photo of my two friends is less than stellar; it's hard to find a style source that matches it well. I think I'll actually try it with the fourth image's style. Try it yourself with photos of your own over at http://deepart.io
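From what I've read, services like this appear to be built on the Gatys et al. "neural style transfer" technique: keep the content of your photo as measured by deep CNN features while matching the style photo's texture statistics (Gram matrices) at several layers. I can't confirm that's exactly what deepart.io runs, but here's a rough Python/PyTorch sketch of the idea (file names and layer choices are my own placeholders):

Code:
# Toy neural style transfer: optimize the pixels of a copy of the content
# photo so its deep features stay close to the original while its Gram
# matrices move toward the style photo's. Skips the usual VGG input
# normalization and pixel clamping for brevity.
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
vgg = models.vgg19(pretrained=True).features.to(device).eval()
for p in vgg.parameters():
    p.requires_grad_(False)

prep = transforms.Compose([transforms.Resize(256), transforms.CenterCrop(256),
                           transforms.ToTensor()])

def load(path):
    return prep(Image.open(path).convert("RGB")).unsqueeze(0).to(device)

def features(x, layers=(3, 8, 17, 26)):   # a few intermediate VGG19 layers
    out = []
    for i, layer in enumerate(vgg):
        x = layer(x)
        if i in layers:
            out.append(x)
    return out

def gram(f):
    _, c, h, w = f.shape
    f = f.view(c, h * w)
    return f @ f.t() / (c * h * w)

content_img = load("original.jpg")         # placeholder file names
style_img = load("style.jpg")
result = content_img.clone().requires_grad_(True)
opt = torch.optim.Adam([result], lr=0.02)

content_feats = [f.detach() for f in features(content_img)]
style_grams = [gram(f).detach() for f in features(style_img)]

for step in range(300):
    opt.zero_grad()
    feats = features(result)
    c_loss = F.mse_loss(feats[-1], content_feats[-1])                  # keep the scene
    s_loss = sum(F.mse_loss(gram(f), g) for f, g in zip(feats, style_grams))
    (c_loss + 1e4 * s_loss).backward()
    opt.step()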

[Images: three sets of Result | Original | Style Source, plus larger versions of some of them.]
You said something about piloting spaceships; was that just a demo thing, or were you by chance playing Vendetta Online? I know the devs at Guild Software have been working on Oculus support left and right.
comicIDIOT wrote:
They had 8 booths, 4 Oculus and 4 Vive and since some of the Oculus experiences were seated they had a few more Oculus slots per day than the Vive. I played EVE: Valkyrie and it definitely took some getting use to.


It was just a demo. But there was also Everest, Mars, a Tatooine Star Wars thing, Bullet Time, an Iray demo, and a few more. No Vendetta Online demos; I think everything ran locally, since I was playing against AI in EVE.
  