Method

For our purpose of getting a "picture of the mind" with the help of words and AI, I designed what I call the "Status Palette". Don't dismiss these everyday words as unimportant or meaningless for describing a mental state! If you make that mistake, you invalidate not only psychology but all of the human sciences.

Mental objects are immaterial, even though they rest on neural structures, and I assume that their formation is based on a unique set of individual sense associations. In the Status Palette you assign a value to each sensation at each of the three imagined levels, "Top", "Middle" and "Bottom". The mind is a whole, so do not assign values to sensations arbitrarily: read the Top/Middle/Bottom descriptions carefully, each time if necessary. Each type of sensation is present at each level, but some matter more than others, and that degree of intensity is what you record.

There are 6 types of sensations: one for each sense, and one that results from the senses, which I conventionally call "animation", the ability of mental objects to move relative to one another.

These are: touch, taste, smell, hearing, vision and animation.
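As a minimal sketch of this full palette (Python notation; the level and sensation names come from the description above, while the numeric values are invented purely for illustration), one intensity value is recorded per sensation, per level:

# Illustrative sketch of the three-level Status Palette, not the actual instrument.
# Each level (Top/Middle/Bottom) holds one intensity value per sensation.
status_palette = {
    "Top":    {"touch": 2, "taste": 1, "smell": 3, "hearing": 1, "vision": 2, "animation": 1},
    "Middle": {"touch": 1, "taste": 2, "smell": 1, "hearing": 3, "vision": 1, "animation": 2},
    "Bottom": {"touch": 3, "taste": 1, "smell": 2, "hearing": 1, "vision": 1, "animation": 3},
}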

To form an image that represents a person, you will need to provide certain data. Remember that although this is personal data, no one but you can understand and use it. All the information we need is in the Status Palette.


The "Status Palette" (sP) is the primary element of my method, which attempts to collect data on the user's current state of mind. To do this, we imagine a short list of possible sensations and show the user a snippet of text. This text is the external object against which the user is asked to note an intensity of experience.

We have simplified the scheme drastically and removed many important details (what is shown here is roughly 0.5% of it). We have also reduced it to a single level, although the full version uses a minimum of three.

We have 5 senses: touch, taste, smell, hearing and sight, to which I have added the sense of "perspective" (a derived sense: the way the brain places objects in relation to each other).

The mind is a universality in which objects can be identified. Each mental object is unique to each individual, but it can support a verbal convention for communication and improvement. Every mental object (simple or compound) is identifiable by a combination of the 6 senses and is communicable through words. Finding the unique combination of senses present in a given situation can indicate a particular word. The word or words identified can then be used to generate an image.

We have: touch (to), taste (ta), smell (sm), hearing (he), vision (vi) and perspective (pe).

I have narrowed down the possible sensations to the following:

Touch: COLD / HOT 

Taste: SALTED / SWEET 

Smell: DECAYED / FRESH 

Hearing: IRREGULAR / RHYTHMIC 

Sight: GREEN / RED 

Perspective (the sense of space): SLOW / FAST
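In this simplified, single-level form the palette can be written down as a small table of intensities, one per sense. The sketch below (Python notation, invented values, names of my own choosing) is only an illustration of the encoding:

# Minimal sketch of the simplified, single-level Status Palette.
# Each sense gets an intensity from 1 to 3; the comments name the pole pair rated.
sP = {
    "touch": 2,        # COLD / HOT
    "taste": 1,        # SALTED / SWEET
    "smell": 3,        # DECAYED / FRESH
    "hearing": 2,      # IRREGULAR / RHYTHMIC
    "sight": 1,        # GREEN / RED
    "perspective": 2,  # SLOW / FAST
}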

Using the form presented here, I collect the intensity of each sensation (1, 2 or 3) and automatically establish a link between these values and a string of words. For demonstration purposes, I have translated a list of the 100 most used English words into simple code (a JavaScript multidimensional array). The words identified can be those relevant to the user, and they can be communicated to image-generation software.
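The demonstration uses a JavaScript multidimensional array; an equivalent Python sketch might pair each word with a profile of sense intensities. The words and numbers below are invented for illustration only (the actual list described later uses dictionaries of at least 20 terms per word):

# Illustrative Python analogue of the word list; values are not the author's data.
word_profiles = {
    "time":   {"touch": 1, "taste": 1, "smell": 2, "hearing": 3, "sight": 1, "perspective": 3},
    "people": {"touch": 2, "taste": 2, "smell": 2, "hearing": 3, "sight": 2, "perspective": 1},
    "water":  {"touch": 1, "taste": 1, "smell": 3, "hearing": 2, "sight": 1, "perspective": 2},
    # ... one entry for each of the 100 most used English words
}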


The program that I have designed will create an image that appears to represent your mental state at the time of data collection. The example images were generated with DALL-E 2 technology. DALL·E 2 is an AI system that can create realistic images and art from a description in natural language (https://openai.com/dall-e-2/). The subject and the data I used to generate the images are confidential.
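As an illustration of this step only (not the confidential program itself), the identified word or words could be sent to DALL-E 2 as a prompt. The sketch below assumes the public OpenAI image-generation REST endpoint and an API key stored in an environment variable:

# Hedged sketch: send the identified word(s) to DALL-E 2 as a prompt.
# Assumes the OpenAI images endpoint and an OPENAI_API_KEY environment variable.
import os
import requests

def generate_mindgraphy(prompt_words):
    response = requests.post(
        "https://api.openai.com/v1/images/generations",
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
        json={"prompt": " ".join(prompt_words), "n": 1, "size": "512x512"},
        timeout=60,
    )
    response.raise_for_status()
    return response.json()["data"][0]["url"]  # URL of the generated image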

Example 1 / Example 2 / Example 3

See Your Soul (S.Y.S.). Here is the description for the software. There are two types of data sets: one type collects data from users, and the other defines the mental objects. To start, I set up the matrices for the 100 most used words, as a dictionary of at least 20 terms (key/value pairs) for each. The Status Palette collects the user's inputs; data collection is done using synaesthetic procedures. After collection, the inputs are compared with the corresponding mental object. The corresponding mental object is a word, and this word is sent to the image-generation software. The result is what I call a "mindgraphy". The image can be used for biofeedback. But aesthetics has its role too, doesn't it?
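A high-level sketch of this flow is given below. All function names are mine, and the bodies are stand-ins for the components described in this section (palette collection, dictionary matching, image generation), so this shows the shape of the pipeline rather than the actual software:

# Illustrative S.Y.S. pipeline; every helper here is a placeholder stub.
def collect_status_palette():
    # placeholder: in the real flow the user fills in the Status Palette
    return {"touch": 2, "taste": 1, "smell": 3, "hearing": 2, "sight": 1, "perspective": 2}

def find_closest_word(palette):
    # placeholder: compare the palette with the word dictionaries (see the matching sketch below)
    return "water"

def generate_image(word):
    # placeholder: send the word to the image-generation software
    return f"mindgraphy for '{word}'"

def see_your_soul():
    palette = collect_status_palette()
    word = find_closest_word(palette)
    return generate_image(word)  # the resulting "mindgraphy"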

Sample code in PYTHON

Find closest dictionary to sP
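One possible way to implement this step, under the simplified encodings sketched earlier, is to treat each word's profile as a vector of intensities and pick the word whose profile lies nearest to the collected palette. The function name and the distance measure are my own choices:

# Sketch: find the word whose sense profile is closest to the collected sP.
# Distance is the sum of absolute differences of intensities over the six senses.
SENSES = ["touch", "taste", "smell", "hearing", "sight", "perspective"]

def find_closest_word(sP, word_profiles):
    def distance(profile):
        return sum(abs(sP[sense] - profile[sense]) for sense in SENSES)
    return min(word_profiles, key=lambda word: distance(word_profiles[word]))

# Example, using the sP and word_profiles structures sketched earlier:
# find_closest_word(sP, word_profiles)  ->  e.g. "water"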

Create an image from predefined text in Python using the Pillow library
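A minimal Pillow sketch for this step (canvas size, colours and file name are arbitrary choices of mine):

# Sketch: render the identified word(s) onto an image with Pillow.
from PIL import Image, ImageDraw

def text_to_image(text, path="mindgraphy.png"):
    img = Image.new("RGB", (640, 360), color="white")  # blank canvas
    draw = ImageDraw.Draw(img)
    draw.text((20, 160), text, fill="black")           # default font
    img.save(path)
    return path

# Example: text_to_image("water")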

Suggested insertion for only one of the senses
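One reading of this item is a snippet that collects the value for a single sense only, for example touch. The sketch below follows that reading; the prompt wording and function name are mine:

# Hedged sketch: collect the intensity for just one sense (touch here).
def ask_sense(name, poles):
    value = int(input(f"{name} ({poles[0]} / {poles[1]}), intensity 1-3: "))
    if value not in (1, 2, 3):
        raise ValueError("intensity must be 1, 2 or 3")
    return value

# Example: record only the touch sensation in the palette sketched earlier.
# sP["touch"] = ask_sense("Touch", ("COLD", "HOT"))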

Create web applications in which words are transformed into images
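One way to sketch such a web application is with Flask and the Pillow rendering shown above; the route, canvas and naming are assumptions of mine, not a specification of the actual application:

# Hedged sketch of a tiny web app: a word comes in, an image comes back.
import io
from flask import Flask, send_file
from PIL import Image, ImageDraw

app = Flask(__name__)

@app.route("/image/<word>")
def word_to_image(word):
    img = Image.new("RGB", (640, 360), color="white")
    ImageDraw.Draw(img).text((20, 160), word, fill="black")
    buf = io.BytesIO()
    img.save(buf, format="PNG")
    buf.seek(0)
    return send_file(buf, mimetype="image/png")

# Run with:  flask --app this_file run   (or app.run() for a quick local test)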


The link in the image below leads to a short article about the concept of mindgraphy:

_______________________________________ 

We own or possess adequate rights or licenses to use all trade names, service marks, service names, patents, patent rights, copyrights, original works, inventions, licenses, approvals, governmental authorizations, trade secrets and other intellectual property rights, and all applications and registrations therefor ("Intellectual Property Rights"), necessary to conduct our business as now conducted and as presently proposed to be conducted. None of our Intellectual Property Rights have expired, terminated or been abandoned, or are expected to expire, terminate or be abandoned. We have no knowledge of any infringement of the Intellectual Property Rights of others. There is no claim, action or proceeding being made or brought, or to our knowledge being threatened, regarding our Intellectual Property Rights. We are not aware of any facts or circumstances which might give rise to any of the foregoing infringements or claims, actions or proceedings. We have taken reasonable security measures to protect the secrecy, confidentiality and value of all of our Intellectual Property Rights.

Mindgraphy