August 26, 2017

Abysmal – Ars Electronica 2017 Deep Space

Nothing like our Deep Space 8K to really show someone's work in stunning detail - like "Abysmal". See it live at #arselectronica17!

Abysmal - Ars Electronica

Artificial Intelligence & Machine Learning Inspired Immersive A/V Performance - Ars Electronica / Linz

Abysmal means bottomless; resembling an abyss in depth; unfathomable.


Perception is the process of acquiring, interpreting, selecting, and organizing sensory information. Perception presumes sensing. In humans, perception is aided by the sensory organs. In artificial intelligence, a perception mechanism puts the data acquired by the sensors together in a meaningful manner. Machine perception is the capability of a computer system to interpret data in a manner similar to the way humans use their senses to relate to the world around them. Inspired by the brain, deep neural networks (DNNs) are thought to learn abstract representations through their hierarchical architecture.

Deep learning is part of a broader family of machine learning methods based on learning data representations, as opposed to task-specific algorithms.

Deep learning emerged in the last decade and has profoundly changed, and is still changing, our present and future world. It refers to a ‘deep mining of data’ with so-called deep neural networks: neural networks with many layers. What a net does is cascade simple linear transformations, interleaved with non-linearities, to represent highly non-linear functions that can efficiently extract the basic structures and patterns within the data and map them to an output that ‘makes sense of the input’. Yes, a neural net is a cascade of layers, and those layers are hidden: who knows exactly what is happening inside! As one adds more and more ‘hidden’ layers the network gets deeper, and it becomes able to represent virtually any function: deep networks are universal approximators. But getting deeper comes with a price: more layers mean more parameters to tune. Learning millions of parameters requires big data; otherwise, neural networks will fail. The learning/tuning process is a game of stepping back and forth in a space of numbers, using the well-known back-propagation technique. This game is played while training the network, just like training a human who learns from experience, well, mostly from mistakes.
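
To make that game of stepping back and forth concrete, here is a minimal sketch in plain NumPy (purely illustrative, not the code behind the piece; the layer sizes, sigmoid activations, and learning rate are assumptions made for brevity): a cascade of two hidden layers, each a linear transformation followed by a non-linearity, trained with back-propagation to represent XOR, a mapping no single linear layer can capture.

import numpy as np

# Toy data: XOR, a mapping that no single linear transformation can represent.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
# Parameters of the cascade: input -> hidden -> hidden -> output.
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 8)), np.zeros(8)
W3, b3 = rng.normal(size=(8, 1)), np.zeros(1)

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
lr = 0.5

for step in range(5000):
    # Forward pass: linear transformation, then non-linearity, layer after layer.
    h1 = sigmoid(X @ W1 + b1)
    h2 = sigmoid(h1 @ W2 + b2)
    out = sigmoid(h2 @ W3 + b3)

    # Backward pass: propagate the squared error back through the cascade.
    d_out = (out - y) * out * (1 - out)
    d_h2 = (d_out @ W3.T) * h2 * (1 - h2)
    d_h1 = (d_h2 @ W2.T) * h1 * (1 - h1)

    # Gradient step: the small move back and forth in the space of numbers.
    W3 -= lr * h2.T @ d_out; b3 -= lr * d_out.sum(axis=0)
    W2 -= lr * h1.T @ d_h2;  b2 -= lr * d_h2.sum(axis=0)
    W1 -= lr * X.T @ d_h1;   b1 -= lr * d_h1.sum(axis=0)

print(np.round(out, 2))

After enough steps the printed outputs should approach [0, 1, 1, 0]: the cascade has tuned its few dozen parameters into a highly non-linear function. A real deep network does the same thing, only with millions of parameters and far more data.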


One type of neural layer is the convolutional layer, which turns a network into a Convolutional Neural Network (CNN): a neural network with a particular ability to extract rich contextual information from image-like data, mimicking how a human observer understands the ‘seen’ world by expressing it in terms of non-seen, non-attended, non-humanly-expressible basic structures. How and why it performs far better than other machine learning techniques, and even surpasses human-level performance on some tasks, is a hot topic, and technical analyses from the perspectives of optimization, probability and statistics, mathematics, control theory, and so on are available; yet it is still a pipeline of linear transformations and non-linearities, nothing more...
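
As an illustration of what a single convolutional layer does, here is another toy NumPy sketch, with a hand-picked vertical-edge kernel standing in for the filters a real CNN would learn from data: the layer slides the kernel over an image and responds wherever that basic structure appears.

import numpy as np

def conv2d(image, kernel):
    # Valid 2-D "convolution" (cross-correlation, as CNN layers implement it):
    # no padding, stride 1.
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A toy 6x6 "image": dark on the left half, bright on the right half.
image = np.zeros((6, 6))
image[:, 3:] = 1.0

# A hand-picked vertical-edge kernel, one of the basic structures the first
# layers of a trained CNN typically end up learning on their own.
kernel = np.array([[-1.0, 0.0, 1.0],
                   [-1.0, 0.0, 1.0],
                   [-1.0, 0.0, 1.0]])

feature_map = np.maximum(conv2d(image, kernel), 0)  # convolution + ReLU
print(feature_map)  # non-zero only along the dark-to-bright boundary

In a trained CNN thousands of such kernels are learned rather than hand-picked, and stacking their responses layer upon layer is what produces the non-seen, non-humanly-expressible structures described above.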

The work mostly shows the ‘hidden’ transformations happening in a network: summing and multiplying things, adding some non-linearities, creating the common basic structures and patterns inside the data. It builds highly non-linear functions that map ‘un-knowledge’ to ‘knowledge’. Our time produces quintillions of bytes of information per day; how can we make sense of this huge amount of data? Let the networks do it for us. We give all of human knowledge and experience to the net, and then it will make sense of everything. As our lives turn into nonsense, maybe we expect the neural network to learn it and give the sense back to us.

3D Animation, Digital Art, NFT Artwork

Check out our Digital Art Portfolio at
www.badqode.com


August 16, 2016

We are invited to Ars Electronica 2017

We are invited to Ars Electronica 2017 to create a 3D projection show in Deep Space 8K! Abysmal is a 3D film about the learning algorithm of an artificial intelligence. It will be screened on a 16 by 9 meter wall and, once again, on 16 by 9 meters of floor. If you'll be around Linz, Austria between 7 and 11 September, we'd be glad to share the experience with you. Cheers!

