Really the next big thing: SAS Global Forum Day 3
Two years ago, I said that data visualization was the next big thing. I also said that people would stick with SAS because it was easier to use and there are more people who DON’T want to be programmers than do.
Fast forward to 2012 and Visual Analytics is all over the place. SAS On-Demand, the free SAS offering for academics, has improved rapidly, from slower than The Spoiled One cleaning her room to actually usable. In fact, I'm giving a Hands-on Workshop on using SAS Enterprise Guide with SAS On-Demand on Tuesday at 1:30. Show up.
Unless you already know a ton about SAS Enterprise Guide, in which case I'd go to Anders Milhoj's presentation on time series. He's from the University of Copenhagen and a pretty good speaker. He tells me that he is even funnier in Danish. I'll have to take his word on that.
My point – of which I occasionally have one – is this: You should really pay attention to me this time when I tell you what the next big thing is. Not only is my previous prediction evidence of my prediction-making ability, but I also came up to the room at 10 and did work while the rest of you people were still drinking. Tomorrow, you will be hung over and I will be richer. So there.
Ready – here it is – big data and cloud computing.
What? No fanfare? Are you thinking that's old hat? Are you saying that's nothing but going backwards to the days of mainframes, when time-sharing referred to computer time and not condominiums in Florida? (Anyone remember TSO?)
Half of you are thinking that, and you are wrong. The other half of you are still wondering what the hell TSO is.
When I was in high school, the school got a couple of hours a semester of computer time for our inner city school students (i.e. me, my friend Michael and maybe one or two other people) to have a programming class. Computer time was EXPENSIVE. We wrote our programs out by hand. Made sure everything was right and then someone more trustworthy than me was allowed to key in the programs on punched cards. (They were right not to trust me, too. You have NO idea, how right they were.) The next day, someone would go back to the university and pick up our output.
Here is the difference, and Paul Kent and Gary King at the SAS Executive Conference touched on it very well …. the difference is that the cost and time to get that output is quickly becoming a non-issue. Kent said we have only just scratched the surface of what can be done with big data, and he is right.
It's been almost 38 years since I was a student at Logos High School and 37 years since I saw my first personal computer, which a guy in the dorm across from me freshman year built from a kit. There were no apps then. Now there are over a million apps.
You know why there are a million more apps now? Because people have computers. Not just professors at universities, programming staff at enormous corporations and a few very sketchy high school kids, but millions and millions of people.
Well, maybe it’s kind of like a million monkeys at a million typewriters coming up with Shakespeare over a million years.
You see, there are literally millions more computers available compared to when I was young and they are running a million times faster. Anybody remember the Atari 64? No, I didn’t think so.
Now, extrapolate that to big data. I can think of some really cool work I would like to do if I had access to a high-performance computer. I have even considered hacking into one, but my aversion to the hours they keep in prison has prevented it. What if getting time to run your job on a thousand processors was really cheap, say, not much more than AOL used to be ($9.99 a month for life, I believe, if you were one of the first subscribers)?
If you had millions of people who had access to high-speed computers, with a program that automated parallel processing (for example, SAS), all you’d need to do is make it easy to process that data.
I've written many times before about how open data is a good idea, but it is not as simple as running a million correlations, pulling out the 50,000 that are significant, and throwing them all into a stepwise regression equation as independent variables.
(Those of you who are non-statisticians are thinking, “Huh, that sounds pretty good to me.” While the statisticians are wishing they were dead so they had graves to roll over in at the very thought.)
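The statisticians' horror is easy to demonstrate. Here is a quick sketch in Python (illustrative only, not SAS, and not anything from the talks): correlate 100 columns of pure random noise with one another, and roughly 5% of the 4,950 pairs will come out "significant" at p < .05 by chance alone. Those are exactly the garbage findings a million-correlation fishing expedition scoops up.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2012)

# 200 observations of 100 completely unrelated variables: pure noise
n_obs, n_vars = 200, 100
noise = rng.standard_normal((n_obs, n_vars))

# Correlate every variable with every other variable
p_values = []
for i in range(n_vars):
    for j in range(i + 1, n_vars):
        r, p = stats.pearsonr(noise[:, i], noise[:, j])
        p_values.append(p)

n_tests = len(p_values)                  # 4,950 pairwise correlations
n_sig = sum(p < 0.05 for p in p_values)  # "significant" by chance alone

print(f"{n_sig} of {n_tests} correlations significant at p < .05 "
      f"({n_sig / n_tests:.1%}) -- in data with NO real relationships")
```

Every one of those "discoveries" is a false positive, which is why throwing them into a stepwise regression compounds the sin rather than correcting it.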
To keep the analogy with my high school days going, imagine how many fewer apps we would have if they all had to be written in assembly language. We actually thought Fortran and BASIC were pretty nifty new ideas.
Can you imagine writing an app with either of those? No, me, neither.
Big data (and its "software," data visualization) is pretty much at the embryonic stage that hardware and software were at when I was in high school. Looking at the explosion in desktop computing and other advances unimaginable back then, I now find it very easy to imagine that we are going to see a similar explosion in the next few years.
I am going to link back here then and say, “I told you so.”
Another thing I will have told you …. SAS is exploring the possibility of a SAS that runs in the browser. You would not have to install SAS on your desktop. If you want to hear about it, you should meet up with Amy Peters at 6 pm on Tuesday somewhere in the Dolphin. If you paid attention to the last 972 words, it should be obvious why.
If you are at SAS Global Forum you can find the meet-up sign-up list on the bulletin boards across from the registration that will give you the room. There are usually other interesting things posted there, too, so if you haven’t gone and looked, you should.