2019 Resolutions

New Year's resolutions: things made to be broken. Or maybe this time, this one time, we (I) may succeed in making progress on what really matters now.

I wish everyone reading this article a lot of happiness in 2019, and I hope you'll check whether the resolutions I'm sharing here make sense to you and your loved ones. I believe they can make or break our future, our children's future, and that of the generations to come, so we should probably not ignore them.

So what are those few things which really matter to everyone?

Number one: the ability to distinguish Fact (Real) from Fiction and to understand the notion of Truth. Hash-tags: #ReFiT or #FaFiT (UUID: ab3fb2d4-9cfa-4f72-9a5b-bebd72b0c65e)

Second: understand how to use computers, automation, and AI to help yourself as an individual to thrive, and to avoid the dangers those technologies pose. Two hash-tags: #SelfReliant (UUID: 91d101ea-d250-4298-9177-20880e1dded6) and #informageddon (UUID: b17ea217-e20c-4202-bb0e-954803cd2176)

#ReFiT / #FaFiT

Our life and quality of life depend on how accurately the inner model of reality our minds have built since birth reflects the actual "hard", "cold" reality. In my opinion there is nothing more important than our ability as individuals to get our inner reality model in sync with the actual common reality out there.

I have written two articles describing the notions of Fact, Fiction, and Truth (links below). In 2019 I would like to get a step closer to giving everyone who reads my articles a clear guide on how to produce and verify factual information using existing technology, and to seek more reliable future ways to produce it.

Fact, Fiction and BS (describes the notions of Fact and Fiction and the danger of consuming a mix of fact and fiction while thinking it is fact, a mix I call BS).

Fact, Fiction and the Truth (describes the notions of Truth and Fact, how we confuse them, and why it is almost impossible to detect a lie).

#SelfReliant & #informageddon

Automation is inevitable; AI is inevitable. If you think you can escape it, you are fooling yourself. So, if you can't beat them, join them. However, as with any new technology, there are some avoidable pitfalls. Computers and AI have some very dangerous and insidious ones. Knowing them is (in some cases) literally a matter of life and death. More than that, it is about our future as free people or as disposable (irrelevant) slaves.

There are two distinct ways this can go down.

The first path is to keep the current social organization unchanged and try to graft automation and AI onto it. In my opinion this is guaranteed to lead to #informageddon and irrelevant slavery, a future 99% of us will hate.

The second path (or social model) is the one where the technologies of automation, computers, and AI are used mainly to enhance each individual, producing a #SelfReliant entity through a symbiosis between human and machine (carbon + silicon). Those entities will be capable of thriving in (almost) any conditions without requiring help from anyone else.

Society (almost as we know it) can then be rebuilt from free-standing, self-reliant entities: a society which will be orders of magnitude more powerful and resilient than the current one, a society which can exist for millions or maybe billions of years. This model has the potential to lead to a social organization close to the one depicted in the sci-fi series Star Trek, or even more advanced (excluding warp drives and teleportation for now), and is probably the real future incarnation of the current American Dream.

In 2018 I created and launched a Global Challenge hosted on https://timenet-systems.com. In 2019 I wish to start a first iteration of the solution design for this challenge.

That is all!

Please leave any comments via Twitter (use the Twitter button on the WordPress page).

Thank you very much for your support and feedback, have a wonderful 2019!

Initial Article Digital Timestamp Archive

Google Assistant – AI – potential astronomical RG Factor?

When the English mountaineer George Mallory was asked "Why do you want to climb Mount Everest?", he answered "Because it's there." Mallory died at 37 doing what he loved, climbing Everest.

I have the feeling that with AI these days we have an equivalent approach: "Why do we want machines to be like people?", and the answer seems to be "Because it's possible!". I really hope we will learn from history and avoid getting lost (or dying) climbing the mountain of humanizing machines.

To understand the technological feat of this simple call, made by a machine with a voice truly indistinguishable from a human's, you would need to open the hood of the machine learning software driving the AI application and take a look. I guarantee that 99.99% (or more) of the people on Earth would have absolutely no clue how the machine does what it does (provided all this was not just a trick).

I mean, this is "rocket science" for most people out there, so Google deserves to be congratulated. Until you start to think… (yeah, thinking can be dangerous sometimes).

So, we hear a machine making a phone call to talk to a human in order to make an appointment. Both parties sound perfectly human, but the caller is not. Now let's think: what would happen if the business also gets a Google business assistant?

Now it really gets interesting, and weird and, well… weird, and here is why. If you know a bit of computer programming, you already know that you can achieve the same final outcome (your computer scheduling an appointment with another computer) with a 1000x simpler and more predictable algorithm, using simple structured text over an internet connection.
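To make the contrast concrete, here is a minimal sketch of machine-to-machine scheduling over structured text. The message format, field names, and functions are my own invention for illustration, not any real Google API:

```python
import json
from datetime import datetime

def make_booking_request(service, start_iso, customer):
    """Client side: build a structured appointment request as plain JSON."""
    return json.dumps({
        "type": "booking_request",
        "service": service,
        "start": start_iso,   # ISO 8601: unambiguous for machines, no speech needed
        "customer": customer,
    })

def handle_booking_request(raw, free_slots):
    """Server side: parse, check availability, and answer deterministically."""
    msg = json.loads(raw)
    start = datetime.fromisoformat(msg["start"])
    if start in free_slots:
        free_slots.remove(start)
        return {"status": "confirmed", "start": msg["start"]}
    return {"status": "rejected", "reason": "slot taken"}

# No synthesized voices, no speech recognition: just text in, text out.
slots = {datetime(2019, 1, 7, 10, 0)}
req = make_booking_request("haircut", "2019-01-07T10:00:00", "Alice")
print(handle_booking_request(req, slots))
```

A few dozen lines of deterministic code replace two speech synthesizers, two speech recognizers, and a phone line, which is exactly the RG-factor point.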

What struck me is that this unavoidable future (as businesses are more likely to get the AI before you do), where machines talk to machines and pretend to be people, is ludicrous. It is in fact the software equivalent of a Rube Goldberg machine, where the only reason to do it is "Because it's possible!".

For those cases, I like to talk about a potential measure of unnecessary complexity: the RG factor, named after the famous (and funny) Rube Goldberg machines.

Don't get me wrong, I'm a computer nerd and I love these things, but I'm also aware of the times when we seem to lose our bearings. Thanks to the audio below, I'll now have to drop my home phone, as machines can now impersonate anyone and call me for insane reasons.

Should I now start sending passwords over encrypted channels first, so that if my wife calls me I can ask her for the password to make sure I'm not talking to a machine?
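Half-jokingly: the standard cryptographic answer to "ask for the password" is a challenge-response scheme, in which the shared secret is agreed on in advance and never spoken aloud on the call itself. A minimal sketch (the key and function names are made up for illustration):

```python
import hmac, hashlib, secrets

# Shared secret established in person beforehand; it never travels
# over the phone line, only challenges and responses do.
SHARED_KEY = b"family-secret-established-in-person"

def make_challenge():
    """Callee picks a fresh random challenge for every call."""
    return secrets.token_hex(16)

def respond(challenge, key=SHARED_KEY):
    """Caller proves knowledge of the key without revealing it."""
    return hmac.new(key, challenge.encode(), hashlib.sha256).hexdigest()

def verify(challenge, response, key=SHARED_KEY):
    expected = hmac.new(key, challenge.encode(), hashlib.sha256).hexdigest()
    # Constant-time comparison avoids leaking information via timing.
    return hmac.compare_digest(expected, response)

c = make_challenge()
print(verify(c, respond(c)))                    # genuine caller
print(verify(c, respond(c, b"wrong-secret")))   # impostor machine
```

Because each challenge is random and used once, a machine that recorded a previous call cannot replay the answer.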

What’s next?

May 16, 2018 —— adding some (positive) ideas ——

The only way to keep the RG factor low for AI language tools is to use them exclusively for personal use. That is, you train your own natural-language interface to your own machine. Both recognition of your voice and speech synthesis (your machine talking back to you in natural language) are valuable tools when it comes to AI use cases.

This means you can "grow into" your personal machine, starting from a very early age and continuing to improve it until you die. This will minimize the error rate for both speech recognition and speech synthesis, leading to a flawless interface between humans and their personal machines. And that is where "personal assistant" has a lot of meaning and real use.

This approach allows us to get one step closer to mastering the many-to-many class of communication we are basically unable to engage in today. This will be done by using natural language to communicate with our personal assistants (human <-> personal machine) and communicating with each other via structured, secure, factual machine-to-machine channels.

In lay terms: you talk to your machine to get things straight and clarified, then your machine talks to the other machines out there. The receiving machine(s) will receive, validate, and process the information, for example removing redundancy (things you already know), and then feed you only what really matters to you and to the whole community of intelligent living entities.
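The validate-and-deduplicate step above can be sketched in a few lines. This is a toy illustration of the idea, not a real protocol; the fact format (plain strings) and the function name are assumptions of mine:

```python
def filter_for_owner(incoming, known_facts):
    """Receiving machine: validate incoming machine-to-machine messages
    and pass along only what the owner doesn't already know."""
    fresh = []
    for fact in incoming:
        if not isinstance(fact, str) or not fact.strip():
            continue                 # "validate": drop malformed items
        if fact in known_facts:
            continue                 # "remove redundancy": already known
        known_facts.add(fact)
        fresh.append(fact)           # only genuinely new items reach the human
    return fresh

known = {"meeting moved to 10:00"}
inbox = ["meeting moved to 10:00", "train strike on Monday", ""]
print(filter_for_owner(inbox, known))
```

The human sees only "train strike on Monday": the duplicate and the malformed item are filtered out by the machine before they ever cost the owner attention.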

The future can be bright. It is all up to us to choose the right path…

TSA