I went to a developers conference last week (IBM Index), my first ever. I’ve been to **many** academic events, and a few commercial events, but never before to a developer event. Overall I was impressed. There were some really good questions after my talk (my favourite was “who is the Edward Tufte of linguistic summarisation of data?”). I also really enjoyed the keynote address by Kelsey Hightower on his satirical “nocode” GitHub project (“The best way to write secure and reliable applications. Write nothing; deploy nowhere.”). It made me laugh and also made me think.
There were some odd moments, though, for example talking to someone who was super-passionate and excited about the Jenkins tool for building software. We use Jenkins at Arria, but I find it very hard to imagine getting passionate and excited about it. But maybe this is just me being narrow-minded…
## Sensible Attitude Towards Deep Learning
One thing that really impressed me was the sensible attitude towards deep learning and other trendy AI technologies. The academic community (as always) is extremely trendy; dubious papers about deep learning get accepted to good venues, whilst non-DL papers struggle to get a hearing (I recently peer-reviewed a paper which used a non-trendy form of machine learning, and one of the other reviewers said he had no interest in anything other than DL-type approaches). Commercial AI events are worse, with many people following and parroting the latest hype from the AI gurus, without seriously trying to understand what is actually going on.
At the developers conference, in contrast, everyone I talked to took a very sensible engineering attitude towards deep learning and other new AI technology: very interesting and exciting, but at the end of the day tools which work in some contexts and not in others. Developers also instantly appreciated concerns and issues that I sometimes have trouble getting academics to take seriously, such as the lack of a large training corpus, the importance of guaranteeing a minimum level of performance, and the need to allow clients to tweak behaviour. All of this made perfect sense to devs and resonated with their own experiences. Many academics, in contrast, assume that corpora are always available (at a keynote at a major academic NLP conference a few years ago, the speaker expressed her astonishment at discovering that you didn’t have large corpora for many real-world NLP applications); that only average-case performance matters (since this is how 99% of academic evaluations are done); and that there is no need to adjust and update systems based on non-corpus information such as new government regulations.
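To make these concerns concrete, here is a minimal sketch (entirely hypothetical, and not Arria’s actual API) of a rule-based data-to-text summariser. It needs no training corpus, behaves deterministically (so a minimum level of performance can be guaranteed), and exposes thresholds that a client can tweak, for example when regulations or house style change:

```python
# Hypothetical sketch of rule-based data-to-text summarisation: no training
# corpus, deterministic output, client-tweakable thresholds.

def summarise_portfolio(change_pct, thresholds=None):
    """Return a one-sentence summary of a portfolio's percentage change."""
    # Client-tweakable boundaries between verbal categories.
    thresholds = thresholds or {"sharp": 5.0, "moderate": 1.0}
    direction = "rose" if change_pct >= 0 else "fell"
    magnitude = abs(change_pct)
    if magnitude >= thresholds["sharp"]:
        adverb = "sharply"
    elif magnitude >= thresholds["moderate"]:
        adverb = "moderately"
    else:
        adverb = "slightly"
    return f"The portfolio {direction} {adverb} ({change_pct:+.1f}%)."

print(summarise_portfolio(6.2))   # The portfolio rose sharply (+6.2%).
print(summarise_portfolio(-0.4))  # The portfolio fell slightly (-0.4%).
```

Because the mapping from numbers to words is explicit, its worst-case behaviour can be inspected and guaranteed, and a client can change the thresholds without retraining anything; a learned model offers neither property for free.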
## Talks from Arria
I also wanted to mention that Arria had several presentations at the event, which are all publicly available; i.e., for once there are some decent technical presentations about Arria which anyone can read. These include:
- My 40-minute presentation on “Discover how to include natural language generation in your applications”. I’m not sure how much sense the slides make on their own; maybe I’ll try to get Arria to record a video of the presentation.
- A 2.5 hour hands-on workshop on using Arria and IBM services to build a tool to summarise an investment portfolio. Interested people should be able to do this on their own, although they may need to provide a credit-card number (but not pay anything) in order to access the IBM services used in the workshop.
- A 10-minute talk on integrating Arria into an Alexa service. There is some code available on GitHub, but it may not make much sense without more context.