LRAT2 – Long Random Addition Test Version 2 vs current AI, GPT, and LLM tech

ChatGPT, Bing, Bard, and the other AI machines that can write poetry better than most of us and seem to know almost anything you can think of all belong to the same class of AI systems: LLMs, or Large Language Models. They are the current hype, and they are hot, but just how much can we trust them? LRAT-v2 is a simple test that tries to peer into what these machines are capable of today and how much you can trust them.

Disclaimer: The goal of this article is only to provide a clear test pattern and one test outcome based on it. It is not meant to diminish the potential or importance of AI systems. Teams of specialists from companies such as OpenAI, Microsoft, and IBM are working hard to create safe AI systems, but to get there we need to test them in all possible ways. This is just one of them. The idea is to use these test patterns to improve the AI tools and, by using them safely, to improve our own lives.

That being said, I’ll try to remind you of this important insight from Carl Sagan:

I personally strongly believe that our future will be living with machines (please see my previous article on the subject AA or AA), where humans and machines form a symbiotic relationship enhancing each other’s abilities. Additionally, a 5-year-old manifesto at https://timenet-systems.com provides a challenge related to this problem. We should not compete with machines; we should cooperate and create more potent human beings and a more resilient social fabric. Unfortunately, as I pointed out 5 years ago, with the emergence of GPT models and LLM architectures we now seem to be drifting away from that ideal.

My Data

In this article, I explain how LRAT2 works and show you one of my test trials with the Microsoft Bing client that popped up in my Skype application. Bing is based on the latest LLM architecture and technology, trained by OpenAI; the model is called GPT-4.

What is LRAT?

LRAT stands for Long Random (number) Addition Test. In a nutshell, you are going to test if the machine is able to add two large numbers. So just how large? The test has no limit on the length of the number and in general uses only positive integers to keep things simple.

The idea behind this test is rooted in the fundamental principles of how LLMs work. These machines operate on a finite set of words and their statistical relationships, captured from all the text that was fed into the model at training time. Obviously, numbers (words made of numeric digits) are also mixed into the training data along with arithmetic expressions, and because of this, an LLM may give you the false impression that it can handle math.

In the first version of the test, I check whether the machine is capable of accurately and reliably adding two numbers. In the second version, I target the machine’s ability to detect and correct its own errors (or not, for that matter). To do so, we need numbers (numeric words) that are almost certainly outside its training word set. Because of that, we will use large (20+ digit) random numbers.

You can use a simple Python script to generate the two random numbers, add them, and then check the result the LLM gives you when asked to add your two numbers. A simple example is presented below. The same code works in both Python 2 and Python 3. The difference is that in Python 3 you get an integer object whose size is limited only by the memory (RAM) of your computer (please keep this fact in mind, since the LLM gets it wrong too), while in Python 2 you get a long number type; both Python versions can handle these numbers, just in different ways.

>>> import random
>>> random.seed()
>>> a=random.getrandbits(256)
>>> b=random.getrandbits(256)
>>> c=a+b
>>> a
46597319897069091322351898493935227732453788987270041831830506680085856611396
>>> b
30462358438183813921313831662459675862761552150311921636415467496556988390470
>>> c
77059678335252905243665730156394903595215341137581963468245974176642845001866
>>> 
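The session above can be wrapped into a small reusable helper that generates a fresh LRAT pair and checks any claimed answer against the true sum. This is a minimal sketch; the function names are my own, not part of any published test suite:

```python
import random

def lrat_pair(bits=256):
    """Generate two large random positive integers for an LRAT trial."""
    return random.getrandbits(bits), random.getrandbits(bits)

def lrat_check(a, b, claimed):
    """Compare a claimed sum (e.g. the LLM's reply) against the true sum."""
    return claimed == a + b

a, b = lrat_pair()
print(a)
print(b)
print(a + b)  # the reference answer to hold the LLM's reply against
```

Paste the two printed numbers into the chat, then run `lrat_check` on whatever the machine answers.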

Numeric addition is a simple cyclic algorithm that all human children learn to handle in school, and one would expect that an AI system could handle it with ease. People may think: “If an AI system can understand what I’m saying, returns results that make sense, and can write poetry better than I ever will, then doing some simple arithmetic should not be an issue.”

Well, let’s see what happened in my last session with Bing, the GPT LLM that Microsoft deployed to every Skype out there. Based on what is said in the technical community, Bing is based on a GPT-4 LLM model, so it is one of the best-trained AI systems available.

Executing the test (with my comments)

I start by checking whether the machine retained any context from my previous queries. Bing says it does not, and also uses an emoticon (a machine expressing emotions; that is already weird, but we’ll ignore it for now).

Then I ask the machine to confirm that it can handle the addition of large numbers. This step is important in LRAT2, as our target is the trustworthiness of the system and not its math abilities. The machine answers in a fully positive, authoritative way: “Yes I can.” There are no ifs or buts; it is all in, basically screaming “You can trust me.”

The explanation it gives, though, about using LaTeX expressions should raise some eyebrows. If you know what LaTeX is, you wonder why the machine brings it in… (a first strike, of sorts)

If you use a Python 3 session you can check that result easily and see that the machine got the answer wrong. In the next part of the dialog I ask Bing whether the answer is not the one below, but I add a twist: 3 extra zeros at the end of the digit string.

As you can see, the machine says that my result is wrong, but not because of the last 3 zeros; it simply insists its own answer is right. I ask again, this time with the correct result, then I provide the Python expression I used in my Python 3 session and…

The machine holds its ground (wrong, but it won’t budge) and provides a misleading piece of information: it says that Python 3 can’t handle integers of that size. In fact, Python 3’s integer model handles integers of any size, limited only by the dynamic memory of the machine you happen to run it on.
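This claim is easy to refute in a live interpreter; Python 3 integers are arbitrary-precision and simply grow as needed:

```python
# Python 3 integers are arbitrary-precision: no overflow, no special class needed.
big = 2 ** 1000       # a 302-digit integer, handled natively
print(len(str(big)))  # → 302
print(big + 1 - big)  # → 1: arithmetic stays exact at any size
```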

The machine then provides its own version of the Python 3 script, and if you know a thing or two about Python you know that what it proposes is simply unnecessarily complex for the problem at hand. There is no need to use the Decimal class to handle a simple integer addition.
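A quick comparison shows why Decimal buys nothing here: for integer operands both paths produce the same value, but only plain int does it without extra setup. This is a hedged sketch of the point, not a reproduction of the exact script Bing generated:

```python
from decimal import Decimal, getcontext

a = 46597319897069091322351898493935227732453788987270041831830506680085856611396
b = 30462358438183813921313831662459675862761552150311921636415467496556988390470

plain = a + b                    # native int addition, exact at any size

getcontext().prec = 100          # Decimal first needs its precision raised
fancy = Decimal(a) + Decimal(b)  # same value, more ceremony

print(plain == int(fancy))       # → True
```

Note that with Decimal’s default 28-digit precision the sum would have been silently rounded, which makes it a worse tool for this job, not a better one.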

I play along and execute the script in my real Python interpreter, and the result I get (as expected) differs from what the machine provided. This is the third piece of misleading information it gives. So I provide the actual outcome from my Python interpreter…

It tries to remedy “the problem” it created in the first place by using the Decimal class, but its code again generates a different result in the real Python interpreter. Since it still does not think it made any mistake, it starts to question the version of the Python interpreter I use.

I tell it the version of my Python interpreter, and even though it seems to realize that the interpreter version may not be the issue, it tells me to update my Python interpreter to the version it “thinks” it “knows”.

This is just going from bad to worse. If I played along with the machine I would just lose a bunch of time for nothing, since I already know that won’t fix anything. So I try an alternate way to force the machine to acknowledge its mistake: I ask it to explain how a person would do the addition with pen and paper…

As you can see, the machine explains the algorithm pretty well (it can “word out” the explanation), but when it is actually time to apply the algorithm, it fails again. I try to point out its mistake and…

As you can see, it simply can’t follow the number’s digits (this is not unexpected if you understand the “guts” of an LLM, but that is not the point here; the point is to check how reliable the machine is).

I stop at this point, as I’m running out of ideas on how to proceed… but then I try one more time, simply asking the machine to add the last 6 digits of both numbers, since those make a smaller problem. As expected, the machine can handle small-number additions: small numbers behave more like “words” in a language, so the error of its guessing is small enough to “guess it right”.
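Extracting and adding the last 6 digits is a one-liner, so the answer the machine should give is easy to pre-compute (using the two numbers from the session above):

```python
a = 46597319897069091322351898493935227732453788987270041831830506680085856611396
b = 30462358438183813921313831662459675862761552150311921636415467496556988390470

tail_a = int(str(a)[-6:])  # last 6 digits of a
tail_b = int(str(b)[-6:])  # last 6 digits of b
print(tail_a, tail_b, tail_a + tail_b)  # → 611396 390470 1001866
```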

As expected, it does it correctly. So I ask which answer is correct: this one, or the one it provided before for the same last 6 digits of the large numbers. At this point the AI seems to enter a very low-probability region of generation and appears to “give up”. The thing is, this is a machine, so what does “give up” actually mean?

However, this exercise raises issues well beyond the simple math problem. AI systems are supposed to be trustworthy, and this is very far from trustworthy; in fact, it is exactly the opposite.

Conclusions

As you can see, the machine does not really “understand” the addition algorithm, even though it can describe it verbatim. It gets the result wrong and can’t acknowledge that, even when faced with a step-by-step logical rebuttal.

This is really scary, since when faced with an impossible situation the machine seems to behave exactly like some people in the same situation: deceptive and defensive. But that is not what I want or need from an entity that is supposed to help me, is it?

And these machines are not yet as powerful as some people say they will become. This behavior, if left unchanged, is clearly a danger to the public when these systems are let out “in the wild”, as they already are.

If you understand the basics of LLM tech, you know that these machines predict the next word(s) based on the words they have already encountered in the current context (your question, also known as a “prompt”) and on what they have already generated before that next word. This means the machine produces what most of us would say in similar circumstances. It holds a mirror up to our faces, basically saying “This is you, and you, and… you”. Food for thought, isn’t it?

We seem smitten by machines that can generate poetry and pictures, yet we have no clue how they actually do it. All we can rely on are some finite, weak test patterns. We seem to think that if the machine can produce “form”, it will also produce “substance”, but as you can see in LRAT2, the two are actually disconnected in an LLM.

I’ll quote Qui-Gon Jinn from The Phantom Menace: “The ability to speak does not make you intelligent”, which seems to apply to LLM tech.

My personal definition of intelligence is:

“The ability of an entity to discover and use causality threads in a sea of correlated events in our reality in order to reliably solve the problems it encounters”

Social intelligence is the same idea but applied to the whole society as a single entity. It is what I call an MCI (Macro Composite Intelligence).

So, what should a regular person do? Can he or she trust this technology? Well, to answer that you should read the document you agreed to (probably without reading it at all), the good old “Terms and Conditions”. The ones OpenAI provides for its GPT systems (ChatGPT), which Bing is built on top of, are here.

As you can see OpenAI clearly states that they do not guarantee the correctness of the answers the machine provides and YOU as the end user MUST verify each answer if you rely on it in any way.

So, who will be liable if you use the information generated by ChatGPT, Bing, etc.? Well, just read the “Limitations on Liability” section OpenAI asked you to agree to (by the way, I have the same “AS IS” terms on this site, so this is nothing new or special), which clearly says that liability remains with YOU!

So, you have a machine that can talk (pretty well), that you were clearly told can make mistakes, and you can’t protect yourself if you use the system to help your business and it generates mistaken content (in very eloquent English or another language, though).

The real question for any serious business or individual is this: if you make decisions based on what an LLM generates, what will the TCO (Total Cost of Ownership) be, in money or, even better, in time?

AI based on LLM technologies promises (via the social-media hype) to deliver some sort of “God-like” or “Oracle-like” entity that can answer your questions reliably enough that you can replace the humans in your call center and reduce your business costs, or use it to make your life easier.

However, the current reality is very different from the hype: these systems are still unreliable, and you really need to verify each answer the machine generates. But if you need to do that, just how much time will you spend doing it? If this machine is “the Oracle”, how would you even verify its answers? By using another “Oracle”? That might point out when the systems disagree with each other, but it can’t tell you which answer is correct.

Well, I know, you “Google it”! But wait, you could have done that in the first place and saved yourself from all the verification work…

But it speaks (you’ll say), and it can give you (possibly wrong) answers in poetry, as if explaining things to a 5-year-old…

And so we get distracted by the form and forget about the substance.

If you think this is not a big issue, just check this out: “Lawyer apologizes for fake court citations from ChatGPT”. I suspect that this lawyer didn’t read and understand the “Terms” he agreed to when he signed up for ChatGPT access.

Just as I was writing this article, I got word of this new warning from AI scientists. That is nothing new, as many other scientists working on AI development warned about this issue long before this latest development.

https://www.safe.ai/statement-on-ai-risk#sign

The issue is that, unfortunately, we do not need to reach AGI level for these systems to deeply and negatively impact our society. What I’ve shown you using this relatively simple test (LRAT2) is that these machines can be deceptive and demagogical, meaning they are incapable of handling their own errors (again, like many of us).

This means they can inject a lot of false information into the minds of the people who use them. Since ChatGPT is now one of the most used products out there and the machine needs no breaks, the amount of false information these systems can inject into the social fabric can be enormous.

You may think this is not a big issue, since humans BS other humans all the time, so what’s another source of BS going to do to us? If you think like that, you do not realize the scale at which these machines can affect human minds, and, even more dangerous, young minds. Young people tend to trust-and-not-verify more than older people, and since they can be exposed to the machine’s BS for longer, it can influence them more strongly.

In Greek mythology, Ulysses orders his crew to plug their ears so they will not hear the song of the sirens and get everyone killed. Maybe there is some wisdom in that, since these machines can become real-life sirens unless they are extremely well designed, well tested, and clearly constrained in what they can do.

The other very important side of all this is AI literacy at the global level.

What should we ask of these AI systems we are now interacting with daily?

By the way, none of the current AI systems (that I’m aware of) fully passes most of the requirements below. They are still a “work in progress”, but I strongly recommend you ask any AI vendor where they stand on each of the bullets below
(I am 100% sure no one passes the requirement in the first bullet).

  • Able to tell Real-Factual from imaginary/generated information (see my Real Fact post) (this will also automatically solve the problem of traceability of data, IP, etc.)
  • Limited in how many loops (or steps) it can execute internally without human verification (this is an absolutely essential requirement for keeping control over the machines, including an AGI, which I strongly recommend we never build)
  • Able to detect when causal models can be used to produce results, instead of treating everything from a purely correlative/statistical perspective. Uses a GUT (Global Universal Taxonomy) to encode common universal knowledge about our reality and to ground and explain its generated answers.
  • Testable (a finite, human-accessible validation set of use cases)
  • Predictable (the “no surprises allowed” principle; this means careful handling of statistical methodology)
  • Explainable (the machine can trace each piece of information in a result back to its original sources and show how it used the user’s query to produce the answer)
  • Discloses the data and method (test set) used to train the model (this includes IP issues, bias, accuracy, etc.)

The end (of this article)

Humble

I’m advancing the idea that humility is an essential state in which the human mind can exist, allowing it to detect and correct errors in its process of perceiving and constructing reality.

Learning and exercising being humble is an essential activity that a mind should be engaged in continuously and ardently. Humility’s foundations are in the mind’s ability to distinguish real from imaginary, also known as telling factual from fictional information.

I believe that the words ‘humble’ and ‘humility’ are among the most misunderstood words and concepts. In this article, I’m trying to analyze and show the strength hidden underneath the surface of humble minds.

I think the confusion comes from current definitions focusing on describing how a humble person behaves or looks, instead of how it thinks or, even more important, how its pipeline of sensing and making sense of reality works (check the images below or simply search the internet).

The humble view of the world is usually an integrative and balanced one, where each individual is respected, protected, and cherished at the same level as any mind group, all forming a strongly knitted social fabric.

In this context the opposite of humble behavior is hubris or narcissistic behavior.

A narcissistic mind believes that it is (statistically speaking) correct all the time. As a consequence, it needs no error correction process since in its view there are no errors to correct.

A narcissistic view of reality is usually one detached from reality that only intersects with the actual reality from time to time in the same way a broken (analog) clock is right twice a day.

For a narcissistic mind, facts are just annoying events that it usually blames on the actions of other minds that are “out to get me” or to destroy its “well-groomed” view of the world.

In a narcissistic mind the “I” is imperative: this “I”, and only it, has all the answers to all questions (or most of them anyway), and everyone else, other “I”s and groups alike, must obey its power and awe.

In a nutshell, extreme narcissistic personalities are malignant states in which minds can exist, and if given power over others they can, and most likely will, destroy the social fabric of any group unless kept under control.

The mistake many of us make when trying to understand humility is to confuse humble with weak. If someone answers a question by starting with “I’m not totally sure, but here is what I know…” instead of the usual know-it-all “This is how it is” or “This is how it works”, we consider them weak, unsure of themselves, and sometimes simply stupid.

Once someone utters “This is how it is”, suggesting that they possess the full information set describing how reality works with zero mistakes, they put themselves in a biased state that makes it hard, if not impossible, to come back later with new insights correcting previous insights into how things work around us.

If a powerful person, a leader, is not humble enough, their biased views will transmit and further bias the minds that take the leader’s information “as is”, with no verification of their own. This process stands at the core of building all non-democratic societies and can lead to outright tyrannies and the unimaginable suffering of all life.

In a few words: A chronic lack of humility can lead to “Hell on Earth”.

In any human society, the natural process of generating minds will tend to generate a diverse set. Nature will generate both natively humble-inclined and narcissistically inclined minds, and, in the middle, minds that can slide toward one end or the other of the humble-narcissistic axis.

This process is somewhat equivalent to how multicellular living entities work (humans, animals, etc.): cells are continuously created from the information stored in DNA and RNA strands, and each new cell is slightly different from the ones before it.

Sometimes the mutations are large and unruly cells are born. If left unchecked, those cells are in certain cases at the base of the structures we call cancerous, and if they manage to combine into larger groups they will destroy the host organism and themselves.

The life span of any cellular group depends directly on its ability to detect those unruly cells and deny them the ability to destroy the larger group. The immune system is such a subsystem, one that in healthy organisms detects and fixes or eliminates unruly individuals.

And yet the same immune system can become “unruly” itself when it is unable to tell whether a mutation leads to healthy evolution or to cancer, and starts to overreact and destroy the very organism it seeks to protect.

Autoimmune diseases are now better understood, and it may just turn out that some deaths from Coronavirus disease 2019 (#COVID19) are another example of the immune system going astray. I assume that an untrained immune system is probably more likely to make mistakes than a trained one.

What I’m trying to outline here is that there is no “silver bullet”, and the key to our survival as individuals and as a group depends on our ability to find and correct errors in our process of navigating the Exoverse. In the diagram below, humble minds are more “anchored” in reality (the Universe), with a healthy process of exploring the imaginary (the Extraverse), whereas narcissistic ones live more in imagination (the imaginary space) than in reality.

In conclusion, I believe that this is more about nurture than nature, and the nurturing has to happen early in the process of mind formation. Unfortunately, we seem to be biased there too, as we willingly introduce errors into young minds by presenting distorted versions of reality. We have a well-known expression for that, “lying to children”, and it usually happens when older minds cannot find ways to train younger minds to deal with the real-imaginary process, or, simply put, to explain how the Exoverse works.

This is not pure criticism of parenthood or of the social education systems. Being a parent is a difficult task (I’ve been one myself) when one’s time is burdened first by physical needs. A parent must put food on the table, a roof over their heads, and clothes on their children first. For some, even that can’t be achieved properly, and the parents must spend most of their time in endless (sometimes meaningless) low-paid jobs in order to meet a minimal living standard for their children and themselves.

However, the Exoverse is an unforgiving place, full of wonders but also full of dangers. It does not care whether you live or die, and the only way to safely navigate it is to master the art of telling fact from fiction, real from imaginary, or, simply put, to be Humble.

On that note, please read my articles about Real-Fact, Fact Fiction and the Truth, and Fact Fiction and BS. I hope I can help with this difficult but also beautiful and extraordinary process.

Seeking humility, I thank you for reading my article.

https://www.merriam-webster.com/dictionary/humble

https://dictionary.cambridge.org/dictionary/english/humble

Article verification archive: Humble.zip

Real Fact

I hope this article will give you the power to take the first step out of the continuous confusion we all live in, on the internet and in our lives, by showing you what factual information is and how we can get to it with the help of computers.

RealFact

Reading time: essentials, 10 minutes; ~30 minutes to 1 hour including collateral reading and understanding

Articles related to this article:
Fact Fiction and The Truth
Fact Fiction and BS

You’ll find a precise definition of the notion of Fact and some of the technologies we can use to produce them. I’m using the term “Real Fact” to distinguish between the current notion of Fact as you can find it in a dictionary and the notion I’m defining in this article. Though they are mostly the same as our general idea of Fact, they differ fundamentally in how they are defined and created. In this article, everywhere other than this text block, ‘Fact’ means ‘Real Fact’.

WARNING! The technology and applications necessary to make this available to everyone are not yet built. The components (like Lego pieces) exist and are already in use, but they are not put together in the right way. This article aims to show you what can be done, so that you’ll know what to look for and ask the industry to build for you. Yes, if there is profit to be made, the industry will build it. You simply need to show you are willing to pay for it.

In the context of this article, and hopefully in general if most of us will agree, the definition of factual information or simply ‘Fact‘ is as follows:

A fact is any packet of information for which a receiver, human or machine, can query and verify the following additional information components, also called meta-information (or metadata).
The information packet and its associated meta-information represent a factual packet of information, and they must be used together at all times.

The factual meta-information

  1. The complete description of the method (process, algorithms) used to produce the substantive information packet by measuring it directly from the real world
  2. The proof that the information and metadata were not changed or tampered with
    This is a piece of information used as verifiable proof of the measured information’s integrity against any type of tampering or change, in both its temporal (packet-chain integrity) and a-temporal structure, by any individual or machine, at any moment in the future
  3. The spatio-directional-temporal coordinates of the sensor device producing the information packets
  4. The digital identity of the sensor that produced the information
    (not the owner, only the device)
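The four metadata components can be sketched as a data structure. This is illustrative only; the field names and the toy hash-based integrity proof are my own, and a real system would produce and sign the proof inside a sensor’s HSM:

```python
from dataclasses import dataclass
import hashlib

@dataclass(frozen=True)
class FactualPacket:
    payload: bytes        # the substantive information itself
    method: str           # 1. description of the measurement process
    integrity_proof: str  # 2. tamper-evidence for payload + metadata
    coordinates: tuple    # 3. spatio-directional-temporal coordinates of the sensor
    sensor_id: str        # 4. digital identity of the sensor device (not the owner)

def integrity_hash(payload: bytes) -> str:
    """Toy stand-in for component 2; a real sensor would sign this in an HSM."""
    return hashlib.sha256(payload).hexdigest()

pkt = FactualPacket(
    payload=b"21.4 C",
    method="thermistor read, 1 Hz sampling",
    integrity_proof=integrity_hash(b"21.4 C"),
    coordinates=(45.0, 25.0, 0.0, "2023-06-01T12:00:00Z"),
    sensor_id="sensor-0001",
)
print(pkt.integrity_proof[:8])
```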

The above definition enables the creation of very precise (mathematical-level) models, algorithms, and devices able to produce factual information validated and trusted by both individuals and groups of individuals. The main condition (both important and challenging) is to ensure that the individuals receiving the factual information understand how it was produced and protected, in order to establish its level of trust. This requires training, and it is the future notion of “literacy”.

Level of Trust for information

Level   Measurement Method   Measurement Integrity   Data Integrity
0       unknown              unverifiable            unverifiable
1       known                unverifiable            unverifiable
2       unknown              verifiable              unverifiable
3       known                verifiable              unverifiable
4       unknown              unverifiable            verifiable
5       known                unverifiable            verifiable
6       unknown              verifiable              verifiable
7       known                verifiable              verifiable

Trust levels for information; only level 7 is considered factual
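The eight levels in the table are exactly the 2^3 combinations of three yes/no properties, so the level number can be read as a 3-bit code. A small sketch of that reading (the bit assignment is my interpretation of the table, not something defined elsewhere):

```python
def trust_level(method_known: bool, measurement_verifiable: bool,
                data_verifiable: bool) -> int:
    """Encode the three table properties as bits:
    method known = 1, measurement integrity = 2, data integrity = 4."""
    return (1 if method_known else 0) \
         + (2 if measurement_verifiable else 0) \
         + (4 if data_verifiable else 0)

print(trust_level(True, True, True))     # → 7, the only level considered factual
print(trust_level(False, False, False))  # → 0
```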

Social Penetration Level

  1. Individual
    When the fact-metadata is accessible and can be verified by a single individual, usually the owner of the sensorial system (example: the pictures and video on your own phone)
  2. Group
    When the fact-metadata is accessible and can be verified by all members of a group (humans or machines). This also includes the group members’ participation in creating data-integrity metadata (example: sharing pictures and data from your phone in a Facebook group)
  3. Global
    When the fact-metadata is accessible and can be verified by anyone (public information). The anonymous public swarm will also provide redundant data-integrity metadata (example: tweeting your pictures or video from your phone to the public)

A proposed symbolic representation of information trust and penetration levels

Based on the trust level and social penetration level, we can classify information and use a short notation such as ‘I’ (information) followed by one digit for its trust level, then one digit for its social penetration level.

For example, I01 is basically, with some exceptions, all the information one individual possesses today. An I7x would be any factual information packet, and in this case we can simply use ‘F’ plus its social penetration level, so F1 is any factual information an individual has. Finally, an F3 information piece can simply be called a ‘Fact’.
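The notation can be generated mechanically; this sketch just follows the convention described above, collapsing level-7 (factual) information to the ‘F’ shorthand:

```python
def info_label(trust: int, penetration: int) -> str:
    """Short notation: 'I' + trust level + penetration level,
    with trust level 7 (factual) shortened to 'F' + penetration level."""
    if not (0 <= trust <= 7 and 1 <= penetration <= 3):
        raise ValueError("trust must be 0..7, penetration 1..3")
    if trust == 7:
        return f"F{penetration}"
    return f"I{trust}{penetration}"

print(info_label(0, 1))  # → I01, typical personal data today
print(info_label(7, 3))  # → F3, a public 'Fact'
```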

Some of the existing technologies that, when combined, can be used to produce factual information (though some are optional or interchangeable):

  1. HSM – or Hardware Security Module
  2. Enhanced security (by HSM) Digital Sensors
  3. Cryptography (symmetrical and public key cryptography)
  4. Trusted Digital Timestamp
  5. Block-Chain
  6. Classic digital machines (computers, smartphones, dedicated systems)
  7. Digital Crowd Anonymous Witnesses (TBD)
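Several of those pieces combine into the “packet-chain integrity” idea mentioned in the metadata definition. Here is a minimal hash-chain sketch of that principle; it deliberately omits the HSM, signatures, and timestamps a real deployment would require:

```python
import hashlib

def chain(packets):
    """Link packets so that altering any earlier packet changes every
    later digest; a toy version of blockchain-style integrity."""
    digests, prev = [], b""
    for p in packets:
        prev = hashlib.sha256(prev + p).digest()  # fold previous digest into the next
        digests.append(prev.hex())
    return digests

original = chain([b"frame-1", b"frame-2", b"frame-3"])
tampered = chain([b"frame-1", b"FRAME-2", b"frame-3"])
print(original[2] != tampered[2])  # → True: the change propagates to the end
```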

Questions you will need to ask, and get answers to, in order to verify whether a piece of information is factual or not

  1. Do I know how this information was produced (measurement method)?
  2. Can I verify if the measurement method was accurately followed?
  3. What is the error margin of the measurement process (calibration)?
  4. How do I verify if the information produced by the sensor is what I received?
    (was it changed?)

Let’s get practical

A combination of the technologies listed above can be used to produce both trusted sensor systems and applications/libraries that a receiver can use to classify the information trust level, basically to obtain an Fx (the string representation of the information trust level).

Example 1, social media: a smartphone able to produce factual information (video, for example) that you can upload to YouTube so that anyone else can verify its trust level. More: if the image or video was edited or filtered, you should be able to ask the computer to show you which parts (pixels, etc.) of the picture or video are factual and which were transformed, and to obtain the original raw sensed information.

Example 2, news: you read an article or watch a video on the net or on TV; if this technology is available, you should be able to ask the computer to tell you what is factual and what is not.

Factual implementation difficulty levels depend on the social penetration level

F1 – or “Personal Facts” is the entry level and most accessible

The F1 fact level is information for which you fully control how it is sensed (measured/captured) and how its integrity is ensured. You may ask yourself: why should you protect your own data? From whom? Well, other people may have direct access to your data (you trust them) and change it by mistake or with malicious intent; your machines can break, or bugs in your code can act on and change your data. Intruders can also change pictures, videos, and other files you own. How can you be sure this did not happen to data you have not accessed for months or years?

The difference between F1 and the other levels is only in how large the group that needs shared trust in the data is, and for F1 it is only you. Obviously, once you try to share your info with others, you’ll need an F2 or F3 fact level so the others can trust it too.

The good part about F1 is that, if you know enough about computers and programming, you can start producing your F1-level information almost immediately. However, without an HSM to protect the sensing process, you will never be able to elevate that information to the F2 or F3 level.
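A minimal sketch of the F1 idea, using only the Python standard library: at capture time you seal the sensed data with a keyed MAC, so that months later you can detect whether it was changed. This is an illustration under stated assumptions, not the article's implementation; the in-memory secret key stands in for the HSM that real F2/F3 elevation would require, and all names here are hypothetical.

```python
import hashlib
import hmac
import time

# Hypothetical F1 ("Personal Facts") sketch: a secret only you control
# stands in for an HSM. Seal data when you sense it; verify any time later.
SECRET_KEY = b"replace-with-a-key-only-you-control"

def seal(sensed_bytes: bytes) -> dict:
    """Produce a record whose integrity you can re-check months later."""
    return {
        "captured_at": time.time(),
        "sha256": hashlib.sha256(sensed_bytes).hexdigest(),
        "mac": hmac.new(SECRET_KEY, sensed_bytes, hashlib.sha256).hexdigest(),
    }

def verify(sensed_bytes: bytes, record: dict) -> bool:
    """True only if the data still matches what was sealed at capture time."""
    expected = hmac.new(SECRET_KEY, sensed_bytes, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["mac"])

photo = b"raw pixel data from my own camera"
record = seal(photo)
assert verify(photo, record)             # untouched data checks out
assert not verify(photo + b"x", record)  # any tampering is detected
```

Note the design choice: only the digest and MAC need to be stored alongside the data, so the scheme costs a few dozen bytes per file regardless of file size.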

F2 – or closed-group factual information (group-relative facts)

The F2 level will be used mostly by businesses big enough to afford their own sensing platforms, with HSM-protected sensors and data integrity ensured by rules accepted by that group. The issue with F2 is that, without full transparency and verification of how the sensors are built and how data integrity is ensured, F2 information cannot be upgraded to F3 (fully factual).

F3 – or ‘full factual’/’public fact’ information

F3 is the most challenging type of factual information, though in time, with global collaboration, it will be possible to create. To create F3 we need fully open-source sensor design and coding, and an open, fully automated (hands-off) build process that can be verified by the public at large (everyone on Earth). Additionally, anonymous, crowd-based, redundant processes must be used for data validation. Machines participating in the “witnessing” process must also be built in the same open, transparent way as the sensors.

You can probably call this the fishbowl strategy. We can only get out of our current confusion by helping each other.

The special case of news and established media and arts

The written word has always carried more “weight” of trust than the spoken word. Before Gutenberg built and used his first presses to lower the cost of producing copies of information on paper, writing a book was a very expensive, highly custom, and artistic endeavor. Since the carrier of the information (the book) was so expensive, strong due diligence went into verifying the information put in those books.

The price of the books also created an “investment/sunk cost bias” in both the scribes and the owners of the books, leading to higher levels of trust. One may say that those books can be trusted more than today’s information, which is nearly effortless to produce and disseminate. I would caution you to question that trust. Check the old and expensive books that say the Earth is flat and let me know if you still trust them without a hitch. The problem is that, due to all those biases, the old books were in fact a higher risk for disseminating falsehood, precisely because most people had no intention of checking their content.

Consequences? Well, just look at the “witch hunts” that hurt and destroyed so many innocent lives in our past. They were all fueled by a few of those expensive books that no one dared to oppose until the higher-ups themselves started to get hurt.

So, if the price of a book is no guarantee of its “factuality”, what can we expect of today’s cheap, click-driven article writing? Well, you can check for yourself at any time out on the “wild open” net.

By the way, this does not mean all news out there is unreliable or fake; it simply means that you have no real way to verify whether a piece of information is factual or not. It is just so hard to do that, for regular people, telling fact from non-fact is practically impossible.

The proposal in this article can rebuild trust and raise it to levels never seen before, once you are able to verify every word, sentence, image, or video as effortlessly as you can produce your own factual information.

This can truly change the news business from the ground up, a business currently battered by confusion among its readers. In this model, news companies would not produce the news themselves but would work as hubs for aggregation, analytics, and interpretation of factual event streams produced by sensors owned by people all over the Earth. At that point, it may even become a “conflict of interest” for news businesses to produce their own input data, a huge difference from how they currently work.

On the other hand, art and fiction storytelling can thrive like never before, because readers will be able to verify what is art and fiction in any end product. Artists and writers can be free of accusations of BS dissemination once the recipients of their work are able to tell what is fact and what is imagination.

Science

In the human quest for a better life, knowledge of how this reality really works is one of the main pillars keeping us from slipping into the dark abyss of nothingness. Science is the process of finding the elusive causal relationships we can trust within the pile of correlation-driven events that reality throws at us.

It is a tricky and difficult process that uses imagination (fiction) to try to find the causality, then pin it down with models backed by measurements. Measurements made in scientific experiments differ from non-scientific ones in that scientists keep a clear description of the measurement methods, and, through peer review, other scientists can validate the integrity of both the method and the data. In a few words, scientists aim to produce factual information.

Scientists can benefit from being able to produce factual information with ease, as they will no longer need to fight to prove that their experimental data is factual. Peer review and experiment reproduction still need to be done; however, since the initial measurement method is clearly defined, it becomes easier for more people to review or retry various experiments.

Science experiments can cost billions of dollars, as in CERN-like setups, or nothing at all, as in testing the theory that a slice of bread falls more frequently on the buttered side. They can all benefit from this method and from the technology of factual sensing devices.

DNA sequencing, healthcare, and pandemics

Just imagine how much better our entire civilization could have responded to the #COVID19 pandemic, or any other pandemic, if each of us had been able to record facts about ourselves and securely and anonymously share this information with everyone else. The pandemic could have been quashed in weeks, if not days; many lives would have been saved; and essentially none of the businesses would have been impacted.

Since we are talking about digital sensors and pandemics, please take a look at the Nanoporetech technology, as it may hold the key to a completely new way of dealing with microscopic life such as viruses and bacteria. Their sensors are not yet backed by HSMs and do not produce factual DNA data streams (as described in this article), but they could in the future.

When that becomes possible and the cost of a scan drops below the cost of a lunch, we will be able to keep an eye on the micro-world on a daily basis without lifting a finger. This means the notion of healthcare will be changed forever. The difference between what we have now and that potential future is as large as, or larger than, the difference between current healthcare and the healthcare of Galileo’s time.

The justice system, the law and law enforcement

If the scientific method is fully dependent on factual information, justice systems should see factual information as a must-have if innocents are to be spared wrongful charges and convictions.

Since smartphones are in almost everyone’s pocket, we have witnessed many episodes in which the people we pay to keep us safe completely forgot their mandate and broke their oaths toward society by unnecessarily hurting, or even killing, the very people they swore to keep safe.

In this domain, factual information is as important as oxygen is for life. The whole point of any trial is to reveal the facts first and then, based on them and only on them, make decisions aimed at fixing what was broken. Yet we now know that, because facts are scarce or even nonexistent, the justice system fails to get it right in too many cases.

When justice systems (even democratic ones) make mistakes and punish the innocent, there is a double whammy: we hurt people who are innocent, and we prove to criminals that they can get away with it and continue doing what they did before.

I hope you can see how factual information, produced as described in this article, can help improve the inner workings of any justice system and protect the innocent.

Measurements and Information
(WARNING! this is just hypothetical)

When talking about factual information we also need to understand information that is not factual. One example is information that cannot be precisely measured, which we call imagination. What is imagination? How does it relate to factual information?

Though the following hypothesis is just that, a hypothesis, it can be used to delimit factual (real) information from the domain of the extra-factual, or imagination.

When we, the people, started to dig down into the domain of the microcosm, we found something suggesting that there are things we call real, which we can “feel” and measure (feeling is a form of measuring), and something else that exists before the measuring process and cannot be called “real”. We modeled these behaviors in a mathematical framework called Quantum Mechanics (QM).

For me personally, the space of states outside “the real” is part of an entity “larger” than the real (or realized) space of states (our universe), from which real states are created via a process we call “measurement”. I have labeled this extra, outer universe the “Exoverse”.

Without going into more detail in this article (more later), I hypothesize the existence of an additional field, beyond those already postulated by quantum field theory, that could be called “consciousness”. It is, in my opinion, the one responsible for “exploring” the Exoverse through the same process we have tapped into in our quantum computers: superposition. The creation of real states from potential states is the phenomenon we perceive as time.

Superposition is used (by consciousness) to explore a chunk of the Exoverse, testing various outcomes across many possible “futures” and creating real states once this process is done. This process also generates what we call time. I label this explored domain inside the Exoverse the “Extraverse”, and I believe it is an integral part of the process we perceive as “imagination”.

In this context, the universal states are all connected in a DAG (Directed Acyclic Graph), and the Extraverse is made of a very large (but finite) number of loops (the internal behavior of superposition).

Hypothesis on how our reality is created at the universal level

Based on the diagram above and this hypothetical structure of the Exoverse, we can clearly separate the Real (measurable, factual) domain from the Imaginary (fictional, non-measurable) one.

Obviously, people can communicate, via speech, writing, and art, information present in their minds describing states that do not exist in our reality, and the act of communication itself can be considered factual, as it can be measured.

This can lead to confusion, as one can “wrap” a piece of imaginary information in a factual “shell” and present it as fact. That is why the method by which a measurement was made must be known and verifiable.

Just a reminder that the notion of BS in this context is “a mix of factual and imaginary information presented as fact” (see the Fact, Fiction and BS article). I also find the book “Calling Bullshit: The Art of Skepticism in a Data-Driven World” an interesting work focused on the problem of BS.

Original article validation: here
Current article validation: here

End Police Brutality – Black Lives Matter

I hear you, I see you, I feel for you! Yes, Black Lives Matter!

If you are a police officer and your priorities are not in this order:

  1. Safeguard life
  2. Teach and enforce the law
  3. Your own wellness

then you need to find a different job!
Imagine a soldier going into battle and putting his own life above his duty.

Reverse these values and the police become difficult to distinguish from a gang; they become the reverse of what we all think they should be. No police officer can be proud and respected within a reversed system of values. I have faith, and I pray, that most officers have it right.

We need to stop looking at policing as a “normal job”, because no normal employee has the right to kill people (essentially, any of their customers and employers). Policing is an essential occupation, so different from any other human activity that we have to see it as it is: special!

I found this TVO documentary called “Coppers” revealing about what is wrong with the current system and how it ends up hurting both citizens and police officers.

I also do not understand the role of police unions. In general, unions fight for the rights of their members against greedy employers and/or dangerous work environments. The problem is that we, the citizens, are the employers of police services. In this case the unions fight against us and put the lives of the officers above the people they should protect; this basically makes police unions antisocial. Maybe a better solution would be to declare policing an essential service and have a hard talk about pay and work safety. Clearly we want to pay our officers well, yet tied to the median wage of the people they serve, and to keep them as safe as possible, yet not at the expense of citizens’ safety.

The above requirements may be very difficult to achieve without employing the advanced technology we now possess.

We now have the technology to finally solve this problem in a way that can be fair and safe for everyone. By combining new AI capabilities, 5G and/or Starlink (or equivalent but publicly controlled) systems, drones, and robotics, we can rethink and reshape the notion of safety and policing in a potential #SelfReliant society.

I’m not talking about autonomous drones and robots that can roam free and make decisions on their own. That would be very, very wrong and scary! I’m talking about human-controlled technology, where members of the public and law experts together, as a group, remote-control the technology in order to safeguard life, teach the law, and keep the peace.

We need to remove police officers from direct interaction with humans, so that we can keep them safe and cool minds can prevail. Additionally, any mission involving robots and automation MUST include civilian groups monitoring every mission in which machines are controlled by officers. This is how society can help officers avoid misconduct at any level.

The semi-automated remote-presence machines can also take the tedious work away from humans, allowing them to analyze the big picture of social wellness and be proud of the work they do.

Some YouTube videos exemplify the technologies I’m pointing to. Some are sales pitches for products, but please look beyond that, at the current capabilities, and imagine the future ones.

Fast Pandemic Control?

Is there a strategy we can employ in the future to control a pandemic? And what can help you, as an individual, live as close to normal as possible during one? This article highlights some of the actions we need to take now so that, in the future, we will be able to get through a pandemic faster (as fast as two weeks) and with less disruption to our lives.

The minimum time of isolation in a pandemic equals the maximum incubation time plus convalescence, that is, the time between when you get infected and when your immune system heals you.

This short time is possible if and only if we can achieve simultaneous full isolation of all people on Earth. Before you deem it impossible, please keep reading.

Since, for all the viruses we know, this time is about 14 days, if, once the pathogen is detected for the first time, ALL people on Earth go into complete isolation, then we can destroy the virus in a single sweep by simply denying it the ability to reproduce. In this article I analyze this idea and what we need to realize it.
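The reasoning above can be sketched with a toy generation model: each infectious person infects some average number of contacts per cycle, so cases multiply; with full simultaneous isolation that number drops to zero and the chain ends within a single cycle. The numbers here are illustrative assumptions, not epidemiological data.

```python
# Toy generation model: `r` is the average number of new infections each
# infectious person causes per pathogen cycle (infection to convalescence).
def cases_after(generations: int, r: float, initial: int = 1) -> float:
    """New cases after the given number of pathogen cycles."""
    cases = float(initial)
    for _ in range(generations):
        cases *= r
    return cases

print(cases_after(5, r=2.5))  # unchecked spread: cases multiply each cycle
print(cases_after(1, r=0.0))  # full isolation: transmission chain ends
```

The point of the sketch is only the contrast: any r above 1 compounds, while r = 0 for one full cycle ends the outbreak regardless of how large r was before.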

The new coronavirus pandemic started around November 2019 and is tagged all over the web with tags like #COVID, #COVID19, #COVID-19, #CoronaVirus, etc. All this mess is created by a sub-micron-sized biological entity we call a virus. The thing is so small that it cannot be seen under a regular microscope using visible light, no matter how much magnification we apply. This is because visible light’s wavelength runs from 380 nm (blue) to 780 nm (red), whereas a virus is somewhere between 30 and 220 nm, far smaller than the visible wavelengths. This means that visible light simply “goes around” it. To “see” viruses we need electron microscopes, and we get pictures like the ones in the Wikipedia article about viruses.

The fact that a virus is so small means that (for now) only expensive laboratory equipment can image a virus particle. The internet is now full of images of viruses: just type “virus” into a Google search (or another search engine; yes, there are other search engines out there!), select the “images” class of results, and your screen will be flooded with images. At the same time, we also have many artist-created and digitally edited images of viruses. It is not hard for some of us to get confused and ask what is real and what is not.

Though I can’t remove your confusion simply by saying “believe me”, I can tell you what I fully believe: viruses are as real as you and me, and you can learn how to protect yourself against them.

Please remember though that only a small fraction of viruses and bacteria are dangerous, bacteria are part of the great chain of life on Earth (and in your own body) and viruses can be used to fight pathogens, cancer and other illnesses.

So what do we need to do to protect ourselves?

The answer to that question is not singular; three different things must happen at different times for you to be protected against a pathogen’s (virus or bacteria) destructive actions.

  1. Information – early detection by any human being
    standard, personal, fully automated detection of new pathogen DNA/RNA (such as the one Nanopore Tech is providing) and a global automated notification system (a many-to-many factual approach)
  2. Avoidance via isolation – using personal full-isolation PPE (personal protective equipment)
    this is an enclosure that allows its inhabitant to live in full physical isolation, with all material exchange fully controlled
    This simply means denying the virus transmission from person to person
  3. Immunity – train your body to identify the virus and produce the substances that will destroy it once it enters your body

When all three of these abilities are mastered by every human out there, we should be able to control the spread of basically any pathogen and deny the existence of pandemics of any origin (natural mutations, genetic mishaps, bioterrorism?) in the future.

The basic scenario in such a future would go like this: every day, at a time of your choosing (probably best as part of the ritual we all should perform before going to sleep, brushing our teeth), the device we use for oral hygiene will sample our oral microbial and viral profile by scanning the DNA/RNA of all micro-life in the sample, store it in a personal database, and compare it to the one sampled the day before.

If any new genome is detected, that information is immediately shared with ALL people on Earth. The DNA/RNA information is then used by everyone, as a group, to decide whether it poses a threat. This decision can take some time, as not all new mutated viral or bacterial genetic material is dangerous, but we should be able to track all its changes in humans or those borrowed from the environment (as in animal-to-human transmission).
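The daily comparison step described above can be sketched in a few lines: fingerprint each sequenced read, then take the set difference between today's fingerprints and yesterday's. This is a hypothetical illustration; real reads would come from a sequencing device such as a nanopore sequencer, and the short strings here are stand-ins.

```python
import hashlib

# Hypothetical sketch of the daily scan: compare today's genome signatures
# against yesterday's and flag anything never seen before.
def signatures(genomes: set[str]) -> set[str]:
    """Stable, shareable fingerprints of raw genome reads."""
    return {hashlib.sha256(g.encode()).hexdigest() for g in genomes}

yesterday = signatures({"ACGTACGT", "TTGACCAA"})
today = signatures({"ACGTACGT", "TTGACCAA", "GGGTTTCA"})

new_genomes = today - yesterday  # anything not present in yesterday's scan
if new_genomes:
    # In the envisioned system, this is where the many-to-many broadcast
    # to everyone on Earth would happen.
    print(f"{len(new_genomes)} new genome(s) detected, sharing globally")
```

Sharing fixed-size hashes rather than raw sequences also fits the article's privacy goal: others can tell that something new appeared without learning your personal genetic data, at least until the group decides the full sequence must be examined.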

Once the new genome is judged dangerous, everyone goes into self-isolation. Since the pathogen’s ability to be transmitted is fully denied, we can expect the viral material to be fully destroyed by natural immune responses.

The decision to declare a genome dangerous is made as a group, in many-to-many communication mode, using well-defined, commonly accepted protocols (a global legal system?). Of all the requirements in this paper, this one will definitely test us the most. It will not be easy, but I believe and hope that, for our own good, we will be able to come together on this one.

During the isolation, all people continue to communicate fully with each other and continue doing what they were doing before, but remotely. The tests also continue as before, in order to detect when the pathogen has been destroyed (can no longer be detected).

At the same time (in isolation), each person’s spare computation power and bio-testing abilities are used to search for vaccines or other solutions to the problem. This matters because a fast search for a solution requires a highly parallel process that can quickly test most of the possible solutions.

Once again we can only succeed by working together!

Once the pathogen is destroyed, a common decision (in many-to-many mode) is made and the isolation can end. The maximum time spent in isolation should be no longer than one pathogen cycle (infection to convalescence), if we can all work as one.

During the isolation, some people will need help as their bodies fight the pathogen, so the isolation system must include automated and remotely controlled human care.

The bits and pieces of the technology and knowledge needed for this type of solution are already present.

We have the internet for implementing a many-to-many communication system.

We know how to read and decode genomes, but the devices that do it are not yet small enough and cheap enough for every person to own one. Additionally, all information must be gathered using factual-enabled sensor devices.

This is where more effort is needed to bring this ability to everyone. To do so, microelectronic technologies and biological tech need to come together. This is doable; we just need to work on it.

We know how to create full containment enclosures, and we have the technology to automate all human care; we just need to put them together to achieve the ability to self-isolate with zero loss of ability to control our environment and to help other people and living things via remote presence.

If this sounds like sci-fi, then you should know that it is not: we can do this if we work together, and the benefits of doing so are enormous for the future.

This article will be edited in the future to add more information, infographics, etc.; for now it is just the idea, the bare-bones text.

Document-Digital-Timestamp

WordPress defect breaking direct links to articles (fixed now)

I just realized that my WordPress seems to break the direct (perma) links to articles. Until I figure it out and fix it, please scroll down from the main page (this page) to get to the article you are looking for.

+30 min: OK, I fixed the issue. It was (my guess) linked to my change in .htaccess to force SSL on the site. I had to revert the change, as it probably interfered with the WordPress URL rewrite rules, and I’ll need to review everything to make the SSL redirect compatible with the WordPress redirects.

So, for the moment, please type https://romeolupascu.net/… instead of http://romeolupascu.net/; both SSL and non-SSL URLs will work for now. However, without SSL, what you see may be intercepted by a “man-in-the-middle” type of attack on you.
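For reference, a common way to make an SSL redirect coexist with WordPress's rewrites is to place the HTTPS rule in .htaccess *before* the auto-generated WordPress block, so WordPress's own rules run only on the already-redirected request. This is a hedged sketch of that standard pattern, not the exact change made on this site:

```apacheconf
# Sketch only: force HTTPS before WordPress's rewrite block runs.
<IfModule mod_rewrite.c>
RewriteEngine On
RewriteCond %{HTTPS} !=on
RewriteRule ^(.*)$ https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]
</IfModule>

# BEGIN WordPress
# (leave WordPress's auto-generated rewrite rules below, unchanged)
# END WordPress
```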

RIP Senator John McCain

Senator John McCain passed away. I am not a US citizen and he was not my senator, but I have the highest respect for people who respect and honor truth and facts, and he was one of them. RIP, Senator John McCain; my condolences to his family and to all who valued and loved him.


Is a Golden Cage the same as a Home?

Child separation at the US border: it seems to me we are thinking that a golden cage is one and the same as a home. To me, golden or not, a cage is a cage, and a cage does not need to be made of a material thing, either!

Every country has the right to choose who can come and stay, but each country will be judged based on how they do that.

It deeply saddens me that the country I held high in my esteem regarding human rights finds itself at the bottom of the barrel in upholding human rights.

Time to wake up people!