This past week marked the highly anticipated Embedded Vision Summit, a gathering highlighting the latest advances in edge vision and AI applications.
We were delighted to be part of the show and had amazing discussions with software engineers building new edge AI and vision applications and looking to scale them globally, which is exactly what the Nx Toolkit helps them accomplish. We also engaged with visionaries pushing the boundaries of new edge AI hardware and algorithms. Now that the dust has settled a bit, let’s take a short look at what we learned at the summit and our next steps.
Network Optix was present at the summit in multiple ways. We had an impressive booth on the exhibition floor, showcasing a wide range of our capabilities.
In addition to the activities and demos at the booth, we also had the opportunity to give talks during EVS. On Wednesday morning, I had the honor of giving a talk called “Scaling Vision-Based Edge AI Solutions: From Prototype to Global Deployment,” detailing the challenges the field faces in jointly growing the market and moving beyond proofs of concept. The talk also introduced OAAX (see below). Later on Wednesday, Robin van Emden led a hands-on, developer-focused live coding session called “Building and Scaling AI Applications with the Nx AI Manager,” demonstrating how to use the Nx Toolkit to build on Nx EVOS. Finally, on Thursday, Network Optix’s CEO Nathan Wheeler gave a talk called “Nx EVOS: A New Enterprise Operating System for Video and Visual AI,” detailing the history of EVOS and the necessity of having an operating system for your video across all verticals.
Beyond our own presence, the summit offered a packed program of presentations, panels, and exhibits. There was a lot of discussion about the opportunities of generative AI, novel transformer networks, and multimodal “generalist” networks. Notably, on Wednesday, Yong Jae Lee gave an amazing talk on the promise of multimodal generalist models, highlighting the future of integrated language, audio, and video capabilities, “generic” learning, and out-of-the-box semantic integration. Multimodal LLMs were a big part of the technical buzz at the show, with a panel session on the topic hosted on Thursday.
Throughout the discussions of generative AI, novel architectures, and new hardware, there was a strong focus on making AI and vision applications work at the edge. The exhibition floor was packed with new (and established) silicon manufacturers; it was great to see advances in computational speed, energy consumption, and cost, which together are making vision AI applications at the edge a reality.
This year’s EVS was special, as it marked the first public release of the Nx Toolkit, featuring the Nx AI Manager. Nx partners have built all kinds of video and AI applications on top of our core video operating system for many years, using tools such as the Metadata SDK, Video Source SDK, and Storage SDK. However, truly releasing the toolkit, including the new Nx AI Manager, which makes AI model deployment and vision pipeline management effortless, was a blast. We received lots of interest from developers exploring how to mature their video and AI proofs of concept into globally scaled enterprise products, which is exactly what the Nx Toolkit enables within minutes. If you want to know more, check out our developer portal or contact us to get started.
Finally, during the summit, we also had a chance to launch and actively discuss OAAX: The Open AI Accelerator eXchange. At its core, OAAX is a standardization layer, a common interface, that makes deploying trained models to novel edge hardware as easy as possible. OAAX combines two things. First, a standardized interface surrounding the “OAAX Toolchain,” the process that takes a generically specified trained AI model (in ONNX format) and converts it into a specific format that can run on the target hardware. This high-level interface gives accelerator manufacturers full flexibility to implement their own toolchains while keeping usage unified and easy for developers. Second, a standardized model runtime interface, the “OAAX Runtime.” Here again, OAAX standardizes the interface so that developers can run inference on any chip in one unified way, while providing full flexibility to manufacturers.
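To make the split between toolchain and runtime concrete, here is a minimal sketch of how these two interfaces could look from a developer’s point of view. The class and method names below are illustrative assumptions for this post, not the published OAAX specification; the open-sourced spec defines the actual interfaces.

```python
"""A minimal, hypothetical sketch of the OAAX idea: one common interface
over vendor-specific toolchains and runtimes. All names below are
illustrative assumptions, not the published OAAX specification."""
from abc import ABC, abstractmethod


class Toolchain(ABC):
    """Converts a generic trained model (ONNX) into a hardware-specific
    format. Each accelerator vendor ships its own implementation behind
    this shared interface."""

    @abstractmethod
    def convert(self, onnx_path: str) -> str:
        """Return the path of the converted, hardware-specific model."""


class Runtime(ABC):
    """Loads a converted model and runs inference. Developers call the
    same two methods no matter which chip sits underneath."""

    @abstractmethod
    def load(self, model_path: str) -> None: ...

    @abstractmethod
    def infer(self, inputs: dict) -> dict: ...


class ExampleRuntime(Runtime):
    """Dummy stand-in for a vendor implementation."""

    def load(self, model_path: str) -> None:
        print(f"loaded {model_path}")

    def infer(self, inputs: dict) -> dict:
        # A real implementation would execute the model on its accelerator.
        return {"scores": [0.0] * 10}


# Developer-side code stays identical across accelerators; only the
# concrete Toolchain/Runtime implementations are swapped per target.
runtime: Runtime = ExampleRuntime()
runtime.load("model.converted")
print(runtime.infer({"image": [0.0] * (224 * 224 * 3)}))
```

The point of the sketch is the shape of the contract: the conversion step and the inference step each hide vendor-specific details behind one stable interface, so swapping accelerators does not change application code.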
At this point, OAAX consists of a number of toolchains and accompanying runtimes that we use internally; we have open-sourced their implementations and specification. In the coming months, building on the excitement generated during EVS, we will be building out the organization surrounding OAAX to further mature this open project.