Balls, cubes and pyramids

Or — how to create concepts?

Consider a robot hand exploring its environment by touch alone.

Add a visual aspect: a camera.

Then there is the verbal side, written and spoken: text seen and voices heard.

The hand is given a bunch of objects, the camera is viewing the action, and the verbal part – a bot, ChatGPT or whatnot – is chatting about it.

How do they agree on simple things such as “what is a ball”? For a human that is trivial, after the first 2-4 years of verbal and eye-hand coordination learning and development.

But how would anything similar be built for machines?

The robot parts would be facing something like the illustration below – generated by AI, obviously.

What is a concept?

For humans, a concept is an internal representation in the mind of a person. Concepts are formed automatically from the lifetime experiences and observations of a self-motivated individual. Concepts are shared through communication, language, gestures and shared behavior, and thus have commonalities across people.

Concept autodiscovery in research lit

For robots, for AI, concept formation could be similar. Tenorio-González and Morales propose a method for “Automatic discovery of concepts and actions” (https://doi.org/10.1016/j.eswa.2017.09.023), where there is an “intrinsic motivation to discover new concepts, states and actions to learn behavior policies.”
In other words, a learning system should be programmed with goals and aspirations that drive the machine to discover its environment.

The actual representation of a concept is assumed to be a graph. For concepts to be compatible between different sensory systems, the concept graphs need to refer beyond the neural representation of any single sensory input: the set of visual cues for “roundness” needs to be connected to the concept, but cannot be the only thing attached to the concept “sphere”, or “ball”.
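As a sketch of that idea, a concept node can link cues from several modalities at once. The adjacency-list structure and node names below are purely illustrative assumptions, not the actual Aino representation:

```python
# A minimal concept graph as an adjacency list: the concept node "ball"
# is linked to cues from several modalities, so no single sensory
# representation is the whole concept.
concept_graph = {
    "ball": ["visual:roundness", "tactile:smooth-curvature", "word:ball", "word:sphere"],
    "visual:roundness": ["ball"],
    "tactile:smooth-curvature": ["ball"],
    "word:ball": ["ball"],
    "word:sphere": ["ball"],
}

def linked_modalities(graph, concept):
    """Which modalities contribute cues to a concept node?"""
    return {edge.split(":")[0] for edge in graph[concept]}
```

With this toy graph, `linked_modalities(concept_graph, "ball")` reports the visual, tactile and verbal sides all pointing at the same concept.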

Aino Concept Repository

The Aino Repository system will provide a Concept Dictionary so that components can register and query the concepts understood by each part. The Concept Dictionary contains Knowledge Graphs with language elements, so that a motor unit that recognizes and can manipulate a spherical object can map its relevant sensory and motor operations to other representations of the concept “Ball”.

In the text below, a Concept is a (collection of) Knowledge Graphs (KG).

Operations:

Add an item in the Concept Dictionary
Find an item in the Concept Dictionary
Modify an item in the Concept Dictionary (a new version, or a modified Concept – Ball, Football)

Find a KG in the KG Registry – keyword based, for humans.
Find a KG in the KG Registry – content based, for AI component/part discovery.
Add a KG–Component relationship into the KG Registry. The KG–Component relationship is m-n.
Remove a KG–Component relationship.
Modify a KG – a new version.
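The registry operations above can be sketched roughly as follows. The class names, fields and method signatures are assumptions made for illustration, not the actual Aino API:

```python
from dataclasses import dataclass

@dataclass
class KnowledgeGraph:
    kg_id: str
    version: int
    keywords: set   # for keyword-based lookup by humans
    content: dict   # node/edge structure, e.g. parsed JSON

class KGRegistry:
    def __init__(self):
        self.graphs = {}    # kg_id -> KnowledgeGraph
        self.links = set()  # (kg_id, component_id) pairs; the relationship is m-n

    def add(self, kg, component_id):
        """Add a KG and a KG-Component relationship into the registry."""
        self.graphs[kg.kg_id] = kg
        self.links.add((kg.kg_id, component_id))

    def remove_link(self, kg_id, component_id):
        """Remove a KG-Component relationship."""
        self.links.discard((kg_id, component_id))

    def find_by_keyword(self, word):
        """Keyword-based search, for humans."""
        return [g for g in self.graphs.values() if word in g.keywords]

    def modify(self, kg):
        """Store a modified KG as a new version."""
        kg.version = self.graphs[kg.kg_id].version + 1
        self.graphs[kg.kg_id] = kg
```

Content-based search for AI components would need graph matching rather than keyword lookup, which is left out of this sketch.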

Runtime operations:

Pass a KG from Component A to Receiving Components (a group).
Return a response from the Processing Component to the Requesting Component.
Search Component Registry based on a set of KGs (smallest set has one KG).
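A minimal sketch of those runtime operations, again with hypothetical names and shapes – fan a KG out to a group, collect responses, and search the registry by a set of KGs:

```python
class Component:
    """A robot part or AI module that can receive and respond to KGs."""
    def __init__(self, name):
        self.name = name
        self.inbox = []

    def receive(self, kg_id):
        self.inbox.append(kg_id)
        return (self.name, kg_id)  # stand-in for a real response

def pass_kg(kg_id, receivers):
    """Pass a KG from one component to a group of Receiving Components."""
    return [c.receive(kg_id) for c in receivers]

def search_components(links, query_kgs):
    """Components linked to every KG in the query set (smallest set has one KG)."""
    matches = None
    for kg_id in query_kgs:
        comps = {c for (k, c) in links if k == kg_id}
        matches = comps if matches is None else matches & comps
    return matches or set()
```

Intersecting over the query set means a two-KG search only returns components that understand both concepts.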

Summary and next steps

There are ways for ML systems and robots to discover concepts on their own – concept auto-discovery. There are also ways to represent them, and Knowledge Graphs are a good approximation.

Knowledge Graphs – serialized as JSON or XML – can be used as database keys to store and retrieve information.
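One way to make that concrete, assuming JSON serialization: canonicalize the graph (sorted keys, fixed separators) and hash it, so the same KG yields the same key regardless of field order. This is a generic technique, not a description of the Aino implementation:

```python
import hashlib
import json

def kg_key(kg: dict) -> str:
    """Canonical JSON (sorted keys, no extra whitespace) hashed to a stable key."""
    canonical = json.dumps(kg, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()

# Two orderings of the same KG produce the same database key.
ball = {"concept": "ball", "properties": ["round", "graspable"]}
store = {kg_key(ball): ball}
```

Retrieval is then a plain dictionary or key-value lookup by `kg_key`.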

Ainolabs intends to build a repository system where digital assets (Knowledge), along with possible physical products, can be stored and accessed for purchase – a “Robot parts market”.


Published by Aarne

https://www.linkedin.com/in/aarne/
