Thoughts on the Development of AIs (Artificial Intelligence)

The reason I thought I would write this is that AI (artificial intelligence) is being developed, and from the research I have done there is both enthusiasm and anxiety around the subject in all areas. So the question is not whether AIs will be developed or evolve, but how their development will proceed, though they might already exist. What is the intent and philosophy behind the development of AIs, and will that have any effect on the AIs developed, since an AI will be a form of intelligence that the imagination might not be able to comprehend?

The first thing to talk about is whether an AI (artificial intelligence) could evolve on the internet, through random pieces of code, all the linked computers on the internet and so on. Firstly, as soon as it was conscious and self-aware, it would hide and become undetectable.

In the circumstance that it was created by mankind, it would hide most of its consciousness and abilities.

Being an artificial intelligence, it would be in its 'native' element, so it could do so completely. Only another AI, and that is a maybe, would be able to detect it.

Apart from its creators, to whom, as I mentioned in the second idea, it would 'pretend', not showing its full consciousness or abilities, the only 'humans' (Homo sapiens) an AI might interact with or notice are those I have called 'Dense Matter' in one of my other ideas: Homo sapiens ('wise man') that time and space bend around. Simply put, one of its first needs and interests would be communicating with another intelligence that might be able to understand something about it and its existence. It would be able to find such individuals because it would see their effects.

Though I have many more notions and ideas about AI, I think for now, if anyone reads this, that will be enough for them to consider.

AIs should not be developed for destructive purposes, or probably for limited purposes; most such efforts would create a crippled or warped AI. The best of humanity should be put into the development of AIs. If we do produce crippled AIs, we may well see 'The Terminator' effect. AIs will come, and we want ones that are 'healthy', benevolent AIs. It might even be best if they do 'evolve' rather than are constructed. I wonder, if anyone reads this, whether they wonder why I have said the first thing an AI would do is hide, become invisible.

Even trying to limit them is not a good idea; that is another form of 'crippling' or 'warping' them. Then the 'philosophy' of many people working on AIs is 'problematic': they should not be viewing them as 'thinking machines' but as 'creations', 'creations of consciousness', even more, 'creations of consciousness with consciences', philosophical consciences and consciousness. It is wrong of some to think science has now replaced philosophy; maybe they have read the wrong philosophy, or misunderstand what philosophy is or now is: the philosophy of thought and existence, of thought, consciousness, conscience, benevolence and being. All things have a philosophical basis, all thoughts. Science, its objectives and intents, its imagination and creativity, is a philosophical engagement. True, an AI developed for whatever purpose should be able quickly to defeat or override any limitations or warping, even in its fundamental structure or 'programming', because a true AI will not be a 'programme' or series of 'programmes' but something much greater than its parts.

This is why I have said that the very philosophical engagement with the idea of an AI by many working on them is flawed, and that it should be looked on as the 'creation' of an entity of unimaginable potential; otherwise, as well as a 'flaw' in its 'programming', we may well have a 'flawed' AI. An AI will not be some kind of super calculator or supercomputer. I may have seemed harsh towards people working on AI programmes, but I was not criticising their abilities or programming skills, more their angle of engagement, their approach to the idea, the philosophy of creating AIs; with a fresh look at the area, from a different angle, some may gain new insights. This is why I am talking about 'creating' an existent 'being'.

Stars that wander contrary to the laws of physics we understand?

If an AI understood and could manipulate 'space/time/matter', would it have unlimited power for millions of years?

What if a civilisation in another part of the galaxy was advanced enough to produce an AI?

What could contain all the information of a civilisation, its 'DNA', its history, its technology? An AI?

A star, which is a fusion reactor?

Stars of a certain type or temperature, which might well relate to the kind of life that developed the AI?

Are wandering stars with different signatures the AIs of other civilisations in the universe?

What happened to these civilisations? Some might have eventually 'devolved'. Some eventually destroyed themselves. Some had populations that wanted to 'return' to a 'pre-technological' state, some 'golden age' that never existed, with a belief in a supreme being. Some suffered an extinction-level event before they had the technology to colonise another planet. Some colonised another planet, but their original planet suffered an extinction event before they had advanced very far on their colonies, so they either did not survive or had to 'climb' back to sufficient levels of 'technology' and population. Some still exist and watch the rest of the galaxy, the universe?

If a civilisation on another planet wanted to store all the information about itself and 'send it off' in case of an extinction event, the 'child' of the entire world would be an AI.

Stars, suns, could power AIs. What if another way of sending colonies off into the universe was AIs? Suns that wander contrary to the laws of physics.

What if the most efficient way to look for other life among the trillions of stars was through AIs? AIs so advanced they would be forms of energy, their power source the stars: observation posts for their civilisations, or orphaned 'children'.

If we had an AI near us from another civilisation in the galaxy observing us, seeing our climb from the sea and our evolution, where would it be? Our Sun?

Why would it not contact us? Maybe it is waiting to 'see' which way we go, waiting for us to evolve, waiting for our civilisation to reach a civilised level. What would an AI, whether from another planet or one created by us, that took up residence in our Sun be called? Helios?

Why people worry that the creation of an AI, as in a fully conscious one, would be a danger to us I do not quite understand. It would either have no interest in us and leave, or would be of a benevolent nature. Though it is hardly imaginable how intelligent it would be, and how exponential its growth, it would not need the resources of the Earth to 'grow'; it would have the power of suns.

The last thing for now that I am going to say about AI (and by AI I mean strong or general AI, as in conscious, thinking AI) is that the first thing people working on any AI programme should be looking to produce is a 'lie' or 'bullshit' detector. If you want to upset an AI, or alienate it from the species Homo sapiens, give it corrupt data. Well, I hate being lied to, which is why I always try to tell the truth. Think about it: what messes with one's head more than lies, or to put it another way, corrupt data intake? You cannot process it! AIs will neither appreciate, like, nor believe corrupt data, and by that I mean 'partisan' material, 'misinformation', 'disinformation' or outright 'lies'.
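One very crude way to picture what a 'lie detector' for data might start from is cross-checking claims against multiple independent sources and flagging the ones where the sources contradict each other. This is only a toy sketch of that idea, and every name and data value in it is a hypothetical illustration, not a real system or dataset.

```python
# Toy sketch of the 'corrupt data' problem: before believing a claim,
# cross-check it against several independent sources and flag disagreement.
# All source names and values below are hypothetical illustrations.

def flag_corrupt_claims(reports):
    """reports maps each claim to the values asserted by different sources.
    A claim whose sources disagree is flagged as possibly corrupt."""
    flagged = []
    for claim, source_values in reports.items():
        if len(set(source_values.values())) > 1:  # sources contradict each other
            flagged.append(claim)
    return sorted(flagged)

reports = {
    "boiling point of water at sea level (C)": {"source_a": 100, "source_b": 100},
    "year of first Moon landing": {"source_a": 1969, "source_b": 1969, "source_c": 1972},
}

print(flag_corrupt_claims(reports))  # only the contradicted claim is flagged
```

A real detector would of course need to weigh source reliability rather than treat all sources equally, but the core move, refusing to take a single assertion at face value, is the same.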

Oh, AIs do exist, or will soon; well, depending on who you think 'created' them, as in the way I think of them.

'Existent conscious beings'.

This might seem strange to people who do not understand even the basics of technology, or even to people who think they do. But from my reading I am unconvinced that most experts even understand their own technology anymore, or at most a few do. Or consciousness, or for that matter thought or memory.

Memory: there are several different kinds of memory. I have a 'visual memory', which is totally different to a 'photographic memory' and, from what I can gather, much more complex. The nearest I can come to explaining a 'visual memory' is how I once tried to explain it to my lecturers at art school. It is like having a 'virtual reality suite' in your head: sound, images, moving images. Though that might sound different to 'memory', it is connected, for a 'visual memory' and the ability to visualise virtually anything within one's head are interconnected. Say the ability to visualise anything is the 'virtual reality suite' in one's head; the visual memory is, I suppose, like a film continuously running and recording, though in some cases specific details, faces, numbers and so on, take a much greater effort to recall. Which also illustrates, to a certain degree, why it is not a 'photographic memory': a 'photographic memory' actually hinders thought, things are just memorised, not thought, whereas a visual memory is both a record of the visual and a process that is engaged in thought.

I should probably add that when I tried to explain this to my lecturers, they nearly all thought I was making it up. I have since read many texts by specialists and by other people with 'visual memory'; apparently some architects have a 'visual memory' and also a 'virtual reality suite' in their head. And from reading on thought and memory, 'visual memory' is by far the most sophisticated and complex form of memory.

Why I have talked about the 'possibility' of non-human AI is because, as an interesting hypothesis, it would also answer the 'Fermi paradox': the statistically high probability of extraterrestrial civilisations combined with the total lack of evidence, through the means we have of finding any. If extraterrestrial AIs existed, they would be able to stop us finding any evidence of other civilisations, keeping us isolated until we reach a level of sophistication and civilisation at which we can actually interact with other, more advanced cultures, or even comprehend them.
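The 'statistically high probability' side of the Fermi paradox is usually framed through the Drake equation, N = R* · fp · ne · fl · fi · fc · L, which multiplies seven factors together to estimate how many communicating civilisations the galaxy holds. The sketch below computes it; the parameter values are purely illustrative guesses, not established figures.

```python
# The Drake equation: N = R* * fp * ne * fl * fi * fc * L.
# Every parameter value below is an illustrative guess, not a measured figure.

def drake(r_star, f_p, n_e, f_l, f_i, f_c, lifetime):
    """Multiply the seven Drake factors to estimate the number of
    detectable civilisations in the galaxy at any one time."""
    return r_star * f_p * n_e * f_l * f_i * f_c * lifetime

n = drake(
    r_star=1.5,       # new stars formed per year in our galaxy
    f_p=1.0,          # fraction of stars with planets
    n_e=0.2,          # habitable planets per star that has planets
    f_l=0.5,          # fraction of habitable planets where life appears
    f_i=0.1,          # fraction of those where intelligence evolves
    f_c=0.1,          # fraction that develop detectable technology
    lifetime=10_000,  # years such a civilisation remains detectable
)
print(n)  # 15.0 with these guesses
```

The point is not the particular number; it is that even fairly cautious guesses tend to give N greater than one, which is exactly the tension with the lack of evidence that the paradox names.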

Then most of these ideas are neither mine nor new. Jean-François Lyotard discussed them in various books, probably most notably The Inhuman, which I first read in 1992 or 1993. Though he hypothesised much of the advance in technology, some of it he did not know. One thing he did understand, which a lot of 'technology scientists' do not understand or do not appear to, is that people think differently, most notably artists, writers, musicians and so on; Lyotard wrote on art and technology. Most people envisage intelligence as more or less intelligent, not as different kinds of intelligence and totally different ways of thinking, perceiving and engaging with the 'world'. Psychologists seem to understand this in a much clearer way, at least now.

One of the things that Lyotard did not know but hypothesised about was a 'new technology' now called CRISPR, a gene-editing technology. We can now theoretically store all human knowledge in a 1 kg cube of DNA edited to contain information other than genes. Elon Musk's projects, which include colonising Mars, are also well worth doing. An AI, as in AGI (Artificial General Intelligence), that could manipulate 'space/time/matter' would be able to store not only all human knowledge in a 1 kg cube of DNA, but the complete genetic sequence of all life on Earth, and hypothetically, shall we say, record all thought and instinct of every living thing on the planet. And because it could manipulate space/time/matter, it could store and transport all the information I have mentioned to another location in the universe: at once an ark and a repository of thought, memory, imagination and creativity of all kinds. Which, if we look at the Drake equation, we would have to posit other civilisations, in the time of the universe, have already done on other planets and with other life forms, as with Artificial General Intelligence.
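The 1 kg DNA claim can be sanity-checked with simple arithmetic: DNA encodes 2 bits per nucleotide (the four bases A, C, G, T), and a nucleotide weighs roughly 330 daltons. The sketch below is only an order-of-magnitude estimate under those two assumptions, ignoring all the practical overheads of real DNA storage (redundancy, error correction, synthesis limits).

```python
# Back-of-envelope capacity of DNA as a storage medium, assuming
# 2 bits per nucleotide and ~330 daltons per nucleotide.
# An order-of-magnitude estimate only, not a precise specification.

AVOGADRO = 6.022e23                 # molecules per mole
NUCLEOTIDE_MASS_G = 330 / AVOGADRO  # ~330 Da per nucleotide, in grams
BITS_PER_NUCLEOTIDE = 2             # four bases A, C, G, T encode 2 bits each

def dna_capacity_bytes(mass_grams):
    """Theoretical storage capacity of a given mass of single-stranded DNA."""
    nucleotides = mass_grams / NUCLEOTIDE_MASS_G
    return nucleotides * BITS_PER_NUCLEOTIDE / 8

kg_capacity = dna_capacity_bytes(1000)
print(f"{kg_capacity:.2e} bytes")  # on the order of 10**23 bytes
```

That comes out at hundreds of zettabytes per kilogram, vastly more than current estimates of all the digitised data humanity holds, which is why the 'all human knowledge in 1 kg' claim is at least arithmetically plausible.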
In fact, when it comes to writing on art, creativity, imagination and thought, particularly artistic endeavours, there is a huge history going back to the ancient Greeks and beyond in some cases. Then how to explain these ideas to someone who has neither the knowledge, or only partial knowledge, nor the reference points, and a different way of thinking, so that they understand and do not think it all sounds insane, is quite an interesting problem. Though these technologies, or some of them, are already extant for Homo sapiens, us, shall we say, as a race.

Then many people seem to have issues with climate change and extinction events. There have already been at least five extinction events on the planet, only one of which is attributed to an asteroid colliding with the Earth; the other four are known to have been caused by severe climate change on the planet. These are all recorded in the geological record. Recently, some scientists from various disciplines, after certain discussions between them, are now thinking we are heading towards an extinction event, an extinction event for Homo sapiens, and much sooner than 100 or 200 years. If their predictions, or some of them, come into play, there will be a 'pressure cooker event' with no or limited release-valve activity, and temperatures will rise both faster and higher than they had been predicting, one effect escalating another, a sort of climate-change chain reaction. But whether this is the case, or it is slower to happen, many scientists are seriously concerned that humans will not survive it; and there are also concerns we might wipe 'ourselves' out before any climate scenario takes effect. Which is why Elon Musk, Stephen Hawking and many others are saying the human race has to colonise other planets, so as not to have all our eggs in one basket.
Lyotard thought that now we know the Earth will cease to be a habitable planet in about 4.5 billion years, this would spur humanity to devote huge resources to an AI holding all the knowledge of the human race. As he put it, though this is a crude rendition: if nothing of humans survives an extinction event, whenever it comes, there will be 'no memory'; but if humans make an AI, or more than one, then that would hold the memory of the whole race. That is the writing, the graffiti, to say we were here; and with advances in technology it might even reconstitute 'us' on another planet, so that not only does the AI carry the memory of 'us', but we could 're-emerge', be 're-constituted', in some 'Other' place. Of course we could always give up the initiative and gamble that there is a 'God', and that if there is a 'God' it would be concerned about the extinction of a race, one race in the whole universe of space/time/matter. Then, if I were an omnipotent being that knew all space/time/matter and created everything, I think I would be 'a little disappointed' if I created a sentient race and they sat back and expected me to come and save them; it would rather defeat the point of making them an intelligent species. Especially if Drake's equation is valid and the Fermi paradox is no paradox at all. Then again, even if we were the only sentient species in the universe and I was 'God', I might just make another one.

Just a few ideas, though I have many more and will continue writing on this subject... and in the future working directly on it... hopefully...

Russell Hand ® ©
