5 Insights From the Bleeding Edge of Technology and Open Source
Oct 28, 2016
No, that wasn’t an IT Crowd-themed Halloween party you saw taking over downtown this week, Raleighites. It was the All Things Open conference (#ATO2016), an annual meeting of developers and IT professionals “exploring open source, open tech, and the open web in enterprise.” This year’s breakout sessions featured a track for nerds like me who are interested in machine learning and artificial intelligence (“ML” and “AI” for the initiated).
[Embedded tweet — Jared Brickman (@JaredBrickman), October 26, 2016]
I spent the day learning how machines learn, and here are my top five takeaways:
Machine learning will level-up businesses’ ability to gain value from data (and we’re just getting started).
In her session, Dr. Zeydy Ortiz shared a hot new finding from Gartner: while business leaders feel that their organizations have gotten good at storing and organizing data, they also feel there’s more work to do before they’re optimally turning data into dollars.
To hear Dr. Ortiz tell it, these self-effacing business leaders are somewhere on the journey to prescriptive data, and haven’t made it yet. To illustrate the quest, she shared this framework:
[Embedded tweet showing the framework — Jared Brickman (@JaredBrickman), October 27, 2016]
If your data is descriptive, you can interpret what happened. If it’s predictive, you can anticipate what will happen next. But you haven’t reached data-value nirvana until your analytics are telling you what to do next. That’s prescriptive data: a fast track from raw information to dollar-driving, actionable insights that cuts out messy, time-consuming analysis and decision making.
And the road to prescriptive data outputs is paved by machines that learn.
In building machine learning algorithms, even the world’s most resource-rich technology firms are turning to open source communities for help.
In his session, Phillip Rhodes of Fogbeam Labs gave us the quick and skinny on 2016’s chart-topping machine learning algos. They’re coming from giants like Google, Facebook, and Baidu, which, despite their access to cash and coders, are collaborating with the open source community to define the future of ML. What’s striking about this approach is how it differs from the way they’ve developed their “bread and butter” algorithms (i.e. Google Search, Facebook’s News Feed algorithm): in the blackest of black boxes.
With a pursuit as aspirational as augmenting human thought with machine intelligence, even giants need a community.
But, the community still hasn’t agreed on how to define the relationship between machine learning and artificial intelligence.
Throughout the conference, some speakers called machine learning an evolution of artificial intelligence. Others called it a “subordinate practice” of AI. Still others said the two fields were entirely separate practices.
Can somebody please task their cognitive computers with figuring out the meaning of machine learning?
“Thought vectors” can help us figure out the meaning of the phrase “machine learning” and so much more.
Since our community of humans can’t agree on semantics, let’s enlist machines to help.
Buckle your seatbelts, Marty, because this baby’s about to hit 88 miles per hour…
On GitHub earlier this year, Facebook released FastText—a natural language processing algorithm for “fast text representation and classification.” It works with “word vectors,” which are produced when computers convert words into sequences of numbers. Once converted to vectors, terms can be analyzed within a broader text. Computers do this to determine the interrelationships of the vectorized words in the text, ultimately deriving the “meaning” or “intent” of each term. This is miles beyond how computers have traditionally treated text: as objects of “unknown” meaning that are stored, modified, and displayed according to human-defined rules (programming).
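To make the word-vector idea concrete, here’s a minimal sketch in plain Python. The vocabulary and vector values below are toy numbers invented for illustration (real FastText embeddings typically have hundreds of dimensions); the cosine-similarity math, though, is the standard way to compare two vectors.

```python
import math

# Toy 4-dimensional "word vectors" -- invented values for illustration,
# not real FastText embeddings.
word_vectors = {
    "king":  [0.9, 0.8, 0.1, 0.2],
    "queen": [0.8, 0.9, 0.1, 0.3],
    "apple": [0.1, 0.0, 0.9, 0.8],
}

def cosine_similarity(a, b):
    """Cosine similarity: close to 1.0 means the vectors point the same way."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Related words end up closer together than unrelated ones.
print(cosine_similarity(word_vectors["king"], word_vectors["queen"]))  # high
print(cosine_similarity(word_vectors["king"], word_vectors["apple"]))  # low
```

Once words live in this kind of numeric space, “related” becomes something a machine can measure rather than something a programmer has to spell out.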
Stick with me here: thought vectors take the concept of word vectors to the next level! They are “vectorized” combinations of multiple words that represent a complete thought or concept. Again, these vectors (“thoughts”) are analyzed in the context in which they appear, where computers come to understand their “meaning” or “intent.” Of anything that has come before, this most closely emulates human thought: processing the meaning of ideas, free of the confines of language. In cognitive computing, it will drive greater speed, depth of analysis, and richness of output.
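One simple way to approximate this in code is to combine the word vectors of a phrase into a single phrase-level vector, for example by averaging them, which is roughly how FastText represents a sentence for classification. The vectors below are toy values invented for illustration, not real embeddings.

```python
# A toy sketch: build a phrase-level ("thought") vector by averaging the
# word vectors of the words in the phrase. All values are invented.
word_vectors = {
    "machine":  [0.7, 0.2, 0.1],
    "learning": [0.6, 0.3, 0.2],
    "rocks":    [0.1, 0.9, 0.4],
}

def thought_vector(phrase):
    """Average the vectors of every known word in the phrase."""
    vectors = [word_vectors[w] for w in phrase.lower().split() if w in word_vectors]
    dim = len(next(iter(word_vectors.values())))
    return [sum(v[i] for v in vectors) / len(vectors) for i in range(dim)]

print(thought_vector("machine learning"))  # a single averaged vector
```

Real systems use more sophisticated ways of composing words into thoughts, but the core move is the same: represent a whole idea as one point in a numeric space, then reason about distances between points.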
As advanced technologies like “thought vectors” are incorporated into sophisticated, business-value-driving solutions, we need to communicate about them in a clear and compelling way.
ThingWorx combines the Internet of Things (IoT), machine learning, and augmented reality into a single platform that helps engineers monitor complex industrial systems (like a water pump or an oil well). That’s a lot of tech (on top of tech), and it can be easy to get lost in the details. But #ATO2016 speaker Greg Urban shared a video that helped me understand the use case within a minute:
In the video, an end user narrates their real-world use case over real-world video, with just enough animation to give a sense of how the product provides value to their work. That’s the kind of efficient and effective content we endeavor to produce here at Centerline, too.
Now, where was this AR-enabled iPad + dump truck when I was six years old “playing construction worker” in the sandbox?
Thoughts? Would love to hear from you – I’m @JaredBrickman.