CMU Summit: AI Panel — how I got into AI, my thoughts on the future and challenges in AI, and things I learned starting a startup


(originally published on Medium)

“There are two kinds of people in AI: the magicians, and the builders.” — Professor Eric Xing at CMU Summit

I am a builder, and I love making magical AI things happen. I have always wanted to share the story of how I got into AI, my thoughts on the future and challenges of AI, and the lessons from starting a startup that I want to pass on to other AI people who are thinking of turning a side project into a company. I got the opportunity on the AI Panel at CMU Summit, the day after the Blockchain Panel.

From right: Professor Lenore Blum (moderator, CMU), Aaron Li (me, Qokka.ai), Qirong Ho (co-founder, Petuum), Jeff Schneider (Professor @ CMU, Senior Engineering Manager @ Uber Advanced Tech Center), Ashish Aggarwal (Principal, Grishin Robotics)


Q: Tell us about yourself, and the story of how you began working in AI

A: Hi, my name is Aaron Li, and I am the founder of Qokka. We build information summarization and visualization systems based on machine learning and NLP. We build platforms for specific verticals, where we hope to help people reduce information overload. So far, we have launched a platform that summarizes Amazon product reviews, covering 8M+ products and 80M+ reviews. Now we are building an automated hub for crypto information, which learns crypto-related data from across the entire Internet for each of thousands of cryptocurrencies.

I worked as a machine learning engineer at Google Research, on core machine learning systems, and at NICTA, Australia’s national CS lab, now part of CSIRO. I was also a lead engineer on the founding team of Scaled Inference, a startup backed by Vinod Khosla, Michael Jordan, and David Wallerstein. I am a CMU alumnus, a graduate of the Language Technologies Institute in the School of Computer Science.

Unlike most people, my journey into AI began as a first-year university dropout. When I was 18, I left New Zealand, went back to Beijing, and started my first business: an online-game t-shirt design and retail website. I built a website in Flash that simulated in-game graphics and an interactive purchase experience, so that people could buy t-shirts with virtual gold instead of real money. With some help from friends and online affiliates, this concept, unique back then, really hit the infant Chinese e-commerce market. In the first few months I got hundreds of orders each month, forcing me to spend all my time improving supply chain and logistics management.

However, even with opportunities unravelling, I still felt deeply unsatisfied. I knew in my heart I was not happy selling t-shirts or doing e-commerce, but I couldn’t figure out exactly why. Then one day I stumbled upon a futuristic promotional video from Microsoft Research, and it showed me many of the possibilities for future technology: transparent touchscreens that people can carry around to explore all kinds of information, virtual assistant AIs that take care of our everyday lives, automated cars and offices that talk to humans and actively optimize themselves to fit users’ needs… The key difference from science fiction is that Microsoft showed how we could actually build these things. That’s when I knew exactly what I wanted to do in life.

I returned to school and asked a friend to take over the business. I transferred to a university (ANU) where I had the best chance to learn as quickly as I could and get into research. Things felt completely different this time, now that I had a clear purpose. And I am very grateful that professors like Marcus Hutter (whose first PhD student was Shane Legg, the co-founder of DeepMind) allowed me to jump directly into PhD-level AI and machine learning courses when I was only in my second year of university. After I finished that course, he introduced me to the general reinforcement learning paper reading group, which really enlightened me and kick-started my journey into AI.

That’s how I got started. From that point onward, AI and machine learning have always been at the core of whatever I do in academia or industry, even when mining Bitcoin or building a platform for cryptos.


Q: Can you predict where AI will be in a few years? What challenges do we face to get there?

I see two major obstacles before AI can be everywhere in our lives: adoption, and fundamental algorithms.

Adoption

Today we have AI tools and algorithms ready for many great things. For example, we can transform everyday objects into AI-powered IoT devices: fridges, ovens, cars, door locks, cameras, even the mic I am holding right now. It is feasible to give these devices the ability to automatically adapt to different users and environments, and even to act on their own intelligently.

These ideas may have sounded like science fiction a decade ago, but they can be reality in a few years. For example:

  • a mic that makes me a great singer or public speaker, by automatically adjusting the pitch, pace, and volume of my voice based on the audience’s attention and positions;
  • a car that automatically warms up and sets up the air conditioner and the navigation system for my destination, right as I step out the door of my house;
  • an oven that automatically cooks food to my exact preference, based on computer vision (like the June Oven), information shared by the fridge, and my eating history.

These things can all be done with AI tools and algorithms we have already built. It is not that today’s technology is not good enough to make them happen; rather, we do not have enough people, time, resources, or motivation to make them happen. Device manufacturers face great challenges in adopting AI and integrating it into their products.

  • There is a severe shortage of engineers, product designers, and domain experts who really understand AI. Almost all of them work at either big tech companies or startups, which leaves almost no one to work on these cool things at traditional manufacturers, where everyday devices are actually built.
  • AI tools are not easy to learn or use, and they certainly require years of prior training and experience in the field before they can be used in production systems. To make things worse, most AI tools are developed and managed by big tech companies and driven by their business and strategic needs. Community open-source projects are scarce, hard to use, and difficult to learn.
  • People still don’t trust AI. When AI systems are in action, we become nervous. We are always forced to prepare for uncertainty and unexpected behaviour, because it is almost expected that something will go wrong (think of Siri or Tesla Autopilot). Even tech people don’t always understand how AI works. Most AI tools and systems work like a black box: when something goes wrong, it is hard to explain why, which makes it even harder to identify the issue and prevent it from happening again.

Fundamental algorithms

Most AI systems today could not work without huge amounts of data. They take advantage of the explosion of information over the last decade and rely on the assumption that there is never a shortage of data. We often hear AI startups say, “if only we had more data… we could do a lot better”, especially companies built on top of data-hungry algorithms and frameworks such as deep neural nets (deep learning).

But the reality is that even though we get exponentially more data every day, it does not translate proportionally into performance improvements for the majority of algorithms, and almost all of them hit a performance ceiling once the amount of data passes a certain threshold. Yet for many practical use cases of AI, that performance is still not enough. Moreover, regardless of how much data we have, there are fundamental inference problems in machine learning we have yet to solve before we reach the next level of machine intelligence, as Turing Award winner Judea Pearl argues in his recent article “Theoretical Impediments to Machine Learning With Seven Sparks from the Causal Revolution”.
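
To make that ceiling concrete, here is a toy sketch of my own (not something from the panel): train the same simple model on progressively larger slices of a standard dataset and watch test accuracy flatten out well before the data runs out. It uses scikit-learn’s small digits dataset and logistic regression purely for convenience; the same curve shape shows up, at a much larger scale, with deep nets and web-scale data.

# Toy illustration of the data "ceiling": accuracy gains shrink as data grows.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

for frac in (0.05, 0.1, 0.25, 0.5, 1.0):
    n = max(1, int(len(X_train) * frac))       # use only the first n training examples
    model = LogisticRegression(max_iter=5000)
    model.fit(X_train[:n], y_train[:n])
    acc = accuracy_score(y_test, model.predict(X_test))
    print(f"{n:5d} training examples -> test accuracy {acc:.3f}")

The exact numbers will vary, but the pattern is the point: each doubling of the data buys a smaller and smaller improvement.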

Instead of solving the major, fundamental problems raised in that article, I would set a lower expectation for the near future. I hope we can solve a smaller problem within a few years: how can machines learn more efficiently from far less data? It is not yet a hot topic in either academia or industry, but it is something we will definitely need in the future. We might need something fundamentally different from deep neural nets.

Some companies have made amazing progress in this area, such as Scaled Inference. They seem to be the first company building a platform that enables “self-optimizing software”, which is supposed to do better than traditional techniques such as A/B testing in “far less time and no extra cost”. In other words, with far less data.
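
I do not know the details of their platform, but a classical, public example of “doing better than A/B testing with far less data” is a multi-armed bandit. Below is a minimal Thompson sampling sketch with made-up conversion rates: instead of splitting traffic 50/50 until a fixed test ends, it keeps shifting traffic toward whichever variant looks better, so fewer samples are wasted on the losing option. (This is only an analogy for the idea of self-optimizing software, not a description of Scaled Inference’s actual method.)

# Minimal Thompson sampling bandit over two variants with hypothetical rates.
import random

true_rates = {"A": 0.05, "B": 0.07}   # made-up conversion rates, unknown to the learner
wins = {"A": 1, "B": 1}               # Beta(1, 1) priors
losses = {"A": 1, "B": 1}

for _ in range(10000):
    # Sample a plausible rate for each variant and serve the one that looks best.
    choice = max(true_rates, key=lambda v: random.betavariate(wins[v], losses[v]))
    if random.random() < true_rates[choice]:
        wins[choice] += 1
    else:
        losses[choice] += 1

for v in ("A", "B"):
    shown = wins[v] + losses[v] - 2
    est = wins[v] / (wins[v] + losses[v])
    print(f"variant {v}: shown {shown} times, estimated rate {est:.3f}")

After a few thousand simulated visitors, most of the traffic ends up on the better variant, which is exactly the “learn with less data” behaviour I am hoping for from these platforms.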


Q: What advice would you give to the audience who are interested in AI and startups, especially the students? Do you have any inspiring story to share?

There are only two things I want to say:

  • It is worthwhile to learn the fundamentals and understand all the math. Don’t just focus on tools and spend all day learning how to use them. For example, it is not that useful to figure out 100 different ways to use TensorFlow, because it might not even be relevant next year.

    By understanding the fundamentals, you will have a much better chance of coming up with original solutions to real problems. In AI, and especially in the startup space, it is not productive to blindly follow whatever everyone else is doing. Given the amount of hype and investment in this space, you could very well just be chasing a magic bubble that will eventually pop.
     
  • For AI startups, it is unlikely that you can just “come up with a great idea one day, build an app prototype, get acquired within months, and become rich”. The good news is that the accumulation of knowledge and skills in this space is well rewarded.

    It could take months, even years, to flesh out an idea into a startup, and even longer to build a real product. In my case, I had some of the initial ideas for Qokka back in 2012–2013, built initial prototypes in 2014–2015, and eventually set up a startup around them in 2017. After I left my previous full-time job, it took me 4 months to get committed funding from investors I knew, and more than 8 months from investors I had never met before.

    So if you have ideas or side projects that you believe are unique and very useful to people, don’t be discouraged if it takes a long time for others to realize the value of your work.





