We are honored to announce that the jury of the Prix Ars Electronica (https://en.wikipedia.org/wiki/Ars_Electronica) 2016 Computer Animation/Film/VFX category has selected our work "Bio-Inspire FullDome AV Performance" for an Honorary Mention!
Thanks everyone!
Check out the video if you haven't yet: http://prix2016.aec.at/prixwinner/17912/
Check out our showreel & portfolio: https://www.badqode.com
Ars Electronica 2016
“RADICAL ATOMS and the alchemists of our time” was the theme of the Ars Electronica Festival staged September 8-12, 2016 at multiple locations in Linz. The prime venue was, once again, POSTCITY, the former Austrian Postal Service logistics facility adjacent to the train station. It provided 80,000 m² of exhibition space for conferences and speeches, exhibitions and projects, concerts and performances, animated films and awards ceremonies, guided tours and workshops. Here are some motifs conveying impressions of this year’s festival.
This was the Ars Electronica Festival 2016 – Ars Electronica Blog
Check out our latest performance at Ars Electronica 2017 too!
Abysmal - Ars Electronica
Artificial Intelligence & Machine Learning Inspired Immersive A/V Performance - Ars Electronica / Linz
Abysmal means bottomless; resembling an abyss in depth; unfathomable.
Perception is a procedure of acquiring, interpreting, selecting, and organizing sensory information. Perception presumes sensing. In humans, perception is aided by the sensory organs. In artificial intelligence, a perception mechanism puts the data acquired by sensors together in a meaningful manner. Machine perception is the capability of a computer system to interpret data in a manner similar to the way humans use their senses to relate to the world around them. Inspired by the brain, deep neural networks (DNNs) are thought to learn abstract representations through their hierarchical architecture.
Deep learning is part of a broader family of machine learning methods based on learning data representations, as opposed to task-specific algorithms.
Deep learning emerged in the last decade and has profoundly changed, and is still changing, our current and future world. It refers to a ‘deep mining of data’ with so-called deep neural networks: neural networks with many layers. What a net does is cascade simple linear transformations to represent highly non-linear functions that can efficiently extract the basic structures and patterns within the data and map an input to an output that ‘makes sense’ of it. Yes, a neural net is a cascade of layers, and those layers are hidden: who knows what exactly is happening inside! As one adds more and more ‘hidden’ layers, the network gets deeper and becomes able to represent any function: neural networks are universal approximators. But getting deeper comes with a price: more layers mean more parameters to tune. Learning millions of parameters requires big data; otherwise, neural networks will fail. The learning/tuning process is a game of stepping back and forth in a space of numbers using the well-known back-propagation technique. This game is played while training the network, just as a human learns from experience (well, mostly from mistakes).
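The cascade described above can be sketched in a few lines of NumPy: two linear transformations with nonlinearities in between, tuned by back-propagation. The toy task (XOR), layer sizes, and learning rate are illustrative choices, not from the original text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: XOR, a function no single linear layer can represent.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Parameters of two stacked linear transformations.
W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.1
for step in range(10000):
    # Forward pass: a cascade of linear maps plus nonlinearities.
    h = np.tanh(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: propagate the error back through each layer.
    d_out = out - y                      # gradient of cross-entropy + sigmoid
    dW2 = h.T @ d_out; db2 = d_out.sum(0)
    d_h = (d_out @ W2.T) * (1 - h**2)    # derivative of tanh
    dW1 = X.T @ d_h; db1 = d_h.sum(0)

    # "Stepping back and forth in a space of numbers": gradient descent.
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

pred = (sigmoid(np.tanh(X @ W1 + b1) @ W2 + b2) > 0.5).astype(int)
print(pred.ravel())  # predictions for the four XOR inputs
```

With more hidden layers the same forward/backward pattern repeats, which is all that "deeper" means mechanically; only the number of parameters to tune grows.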
One type of neural layer is the convolutional layer, which turns a network into a convolutional neural network (CNN): a neural network with a particular ability to extract rich contextual information from image-like data, mimicking how a human observer understands the ‘seen’ world by expressing it in terms of unseen, unattended, not-humanly-expressible basic structures. How and why it performs far better than other machine learning techniques, and even continues to beat human-level performance on some tasks, is a hot topic, and technical analyses from the perspectives of optimization, probability and statistics, mathematics, control theory, etc. are available; but it is still a pipeline of linear transformations, nothing more...
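A minimal sketch of the operation inside such a layer: a small kernel slides over image-like data and responds wherever a local pattern appears, here a vertical edge. The image and the hand-written Sobel-like kernel are illustrative assumptions; a CNN learns many such kernels from data instead.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D cross-correlation, the core of a convolutional layer."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

# A 5x5 "image": dark left half, bright right half.
image = np.array([[0, 0, 1, 1, 1]] * 5, dtype=float)

# A Sobel-like kernel that responds to vertical edges.
kernel = np.array([[-1, 0, 1],
                   [-2, 0, 2],
                   [-1, 0, 1]], dtype=float)

fmap = conv2d(image, kernel)
print(fmap)  # every row reads [4. 4. 0.]: strong response at the edge
```

The resulting feature map is exactly the kind of ‘hidden’, non-humanly-chosen basic structure the text describes; stacking such layers with nonlinearities yields the deep hierarchy.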
The work mostly shows the ‘hidden’ transformations happening in a network: summing and multiplying things, adding some non-linearities, finding the common basic structures and patterns inside data. It creates highly non-linear functions that map ‘un-knowledge’ to ‘knowledge’. Our time creates quintillions of bytes of information per day; how can we make sense of this huge amount of data? Let the networks do it for us. We give all of human knowledge and experience to the net, and it will make sense of everything. As our life becomes nonsense, maybe we expect neural networks to learn it and give its sense back to us.
Check out our Digital Art Portfolio at www.badqode.com