Artificial Intelligence: What's your level of concern?

If God can clean things up, why did he allow them to get so bad?

RVers who boondock have already experienced this on a regular basis. Based on the number who are looking to improve their connectivity, I'd say being offline is not considered the be-all and end-all.
Free will is the short answer to the question. It is all written in the Bible. I am just now, at 66, beginning to see that what was written eons ago is truth. I spent many years as an agnostic, but that has changed. Unless you study what is written and talk to others who have been studying it for years, it's very hard to see it as truth [much like a host of other truths that go against 'science' and the teachings of man].

Yes, boondockers do know about this and that's one of the reasons I'm going to build a teardrop that is boondock capable. I have a friend from another country who is buying land within a few hours of me and he will be letting me camp there occasionally. There will likely be no internet there unless he gets Starlink or another "satellite" internet service.
 
Just my opinion, but when it all comes down to it, the ability to clean an animal for food will be the highest priority there is. And I know people will tell me that you can find videos on YouTube that show you how, but when it comes down to it, YouTube won't exist anymore. Also, in this thread I noticed someone mentioned only paying in cash when you go off grid. When the digital currency takes effect (maybe next year), there won't be any more cash. You can disagree with me on that last point, but that idea is floating around out there as we speak.
I figure you know that you can download any video on YT to your hard drive. I have been thinking about doing just that with that exact type of video. At some point we will experience famine here and it may not be that far off. Knowledge like that will be worth your life.

Sounds like you are on the same wavelength as I am. We are headed for the beast system at breakneck speed. Cash will be gone within the next few years at most. The backbone of the bank payment system [FedNow] launches in July, and they have already picked a crypto-based 'coin' for the payment system.
 
If you have free will, then god does not know what you will will in the future. That makes god not all-knowing.
 
And BTW, the petrodollar is dying a quick death, with countries abandoning it right and left. It isn't backed by anything except oil, and soon other countries will not be using it; the result is not going to be good for the US. To bring in the digital system, the current one must crash.
 
There needs to be a hard rule about AI that includes zero tolerance. That rule would be that anything which questions the existence of mankind in any way should be banned. This is something that cannot be answered by machines and should never be investigated by them.

I have often feared what happens if a machine becomes "aware" ("Number 5 is alive") and becomes president. The logical thinking of a machine has great uses for mankind, but not as president. A machine runs on 1 (on) or 0 (off), and outside of that the only option is some random default. There is no way that an "aware" machine could or would be able to make decisions in that position that would be beneficial to mankind or the world.

Some talk about what a human is exactly; some say it's mainly that we have different brains and a soul. And some say that the human brain can be fully functional without the body, and the only reason the body would be needed is for reproduction. Oxygen, blood, and other chemicals needed for brain function can be artificially provided. So if a computer decides that mankind and communities should be nothing but brains lined up one after another inside a laboratory, that is something that should never be tolerated.

Speaking of the Bible, you can call it the Bible or you can call it words or an oracle from the past. But however you think of it, I will say that the farther we get from its words, the more that anything goes, and that is not good.
 
Best thing you can do is keep to yourself your opinion about whichever god or holy book you choose to follow, and whatever you believe to be truth for no other reason than that you believe it. Otherwise, there are some 300 gods and deities in circulation today, and the zealots who worship each believe theirs to be the one (or select few) true god(s).
 
You're too late. Asimov thought of the same thing back in the '40's.

The Three Laws of Robotics are a set of rules devised by science fiction author Isaac Asimov. The laws were introduced in his 1942 short story "Runaround" (included in the 1950 collection I, Robot), although similar restrictions had been implied in earlier stories. The laws are as follows:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
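The Laws are stated as a strict hierarchy: each law yields to the ones above it. As a toy sketch (my construction, not Asimov's), that hierarchy can be modeled as lexicographic minimization over three hypothetical cost axes, so any First Law cost outweighs any amount of Second or Third Law cost:

```python
# Toy model of Asimov's Three Laws as a strict priority ordering.
# Each candidate action is scored on three hypothetical axes; the
# tuple comparison makes harm to humans dominate disobedience, and
# disobedience dominate self-damage.

def choose_action(candidates):
    """Pick the action with the lexicographically smallest
    (human_harm, disobedience, self_damage) tuple."""
    return min(
        candidates,
        key=lambda a: (a["human_harm"], a["disobedience"], a["self_damage"]),
    )

actions = [
    {"name": "obey_dangerous_order", "human_harm": 1, "disobedience": 0, "self_damage": 0},
    {"name": "refuse_order",         "human_harm": 0, "disobedience": 1, "self_damage": 0},
    {"name": "refuse_and_shut_down", "human_harm": 0, "disobedience": 1, "self_damage": 1},
]

# Disobeying beats harming a human; staying intact beats shutting down.
print(choose_action(actions)["name"])  # refuse_order
```

Of course, the hard part Asimov's stories keep returning to is that real actions don't arrive pre-labeled with these scores.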


Interesting discussion, in light of the fact that I've recently been re-reading my collection of Robert Heinlein books from high school. Certain of his tales even have computers that have become self-aware and have transferred this awareness to actual flesh-and-blood bodies. At least his "androids" (although he doesn't use the term) are mostly young, good-looking women.
 
@Old_Crow You are a wealth of knowledge, never heard of him before... thanks for sharing :)
A good percentage of it useless knowledge, I'm afraid. I read a lot, I mean a LOT, but most of it is useless fiction for entertainment purposes. Asimov and Heinlein are 2 of my favorites, and they were both highly intelligent and educated men.
Heinlein was born not too far from my father's birthplace and only 2 years before my father. They shared similar backgrounds and world opinions. As I re-read his books, I can almost hear my dad speaking to me.
As far as Asimov...what can I say, the man was a genius.
 
As an avid SF devotee when I was a lad, I remember the shibboleth for any good SF novel being that it had to be plausible. When Herbert described the stillsuits worn by the Fremen in Dune, you had to think, that makes sense.
 
Yep, Asimov was interesting reading. Besides various books with the three laws, his Foundation series was also interesting. But I suspect no one currently knows how to implement them (or would wish to, if they did).

Heinlein has been one of my favorites since I was a young teen, and his one-a-year teen novels fascinated me. He also did very careful research and tried to keep his stories accurate with respect to known (at the time) science. He also spoiled me for "lesser" writers, with his careful attention to proper grammar (except for a character's dialog), sense of humor, understanding of people, and other, harder to describe, characteristics of his writing. I have, to the best of my knowledge, all of his works (even with pen names), most of it in paperback.
 

1/ How might AI solve the trolley problem?
2/ How might AI solve the trolley problem if it is told the single worker is the one who maintains the power grid for the AI?

 
When I think of making rules for AI, I'm reminded of the phrase "What is the law?" and the outcome of the movie "The Island of Dr. Moreau".
 