You might be scratching your head right now thinking “AI….Oak Island Treasure….WTF?”
Well, I thought that too at first. However, after 3 days of careful interrogation and manipulation of AI chatbots including ChatGPT and others, I now know where the treasure is/was buried and who put it there.
Yeah right…I hear you thinking. So just bear with me.
To whet your appetite for more, and to demonstrate the true power of AI, I’m going to start out by making a bold statement, then I’m going to work backwards to prove the statement and show you how AI helped me get to the truth.
Fact – The Knights Templar deposited artifacts and valuables on Oak Island.
Fact – Thanks to ChatGPT and AutoGPT I’ve been able to pinpoint, with 98.75% certainty, the location where the treasure was deposited. Whether or not it’s still there to this day, I have no idea.
The route to getting at the above information was a long and circuitous one. The journey started with a theory born around 2 years ago, one that I was unable to prove or disprove until 2023 and the advent of ChatGPT 3.5.
I should also point out that ChatGPT is a mixed bag: on one hand it’s an unrivaled source of valuable research data that can be used to solve complex and multi-layered puzzles; on the other hand it can be as dumb as a rock and, actually, quite dishonest to boot.
Before we get too far into it, we need to look a little into AI Chat and understand a little more about how it works.
From now on I’m going to use the term “AI”, meaning “artificial intelligence”, to encompass all of the tools that I used in solving the Oak Island puzzle, including ChatGPT 3.5/4, Bing Chat, and AutoGPT.
So how can AI be dumb? Well, it pulls from a vast pool of data, and that vast pool of data was scraped from the Internet mostly prior to 2021. As with anything on the internet there’s both accurate and false information available on the same subject.
In most cases, ChatGPT isn’t smart enough to know whether the data it has in its resource library is right or wrong. It can make an assessment of which fact appears to be ‘more right’ based on how many times that fact appears in its data set versus the other ‘fact’.
That’s fine on popular topics with lots of data, not so much on niche subjects where data is sparse.
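As a rough analogy (this is not how a language model actually works internally, just a toy illustration of the frequency effect described above), imagine simply counting how often each conflicting claim appears in the source data; the counts below are made up:

```python
from collections import Counter

# Hypothetical tallies of two conflicting "facts" about the Money Pit's
# discovery date, as they might appear scattered across scraped web pages.
claims = ["1795"] * 8 + ["1849"] * 3

# The claim seen most often wins, regardless of which one is actually true.
most_common, count = Counter(claims).most_common(1)[0]
print(most_common)  # → 1795
```

On a niche subject, the tallies can be nearly tied, which is why repeated probing yields different answers.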
I asked AI 3 different questions to help establish its limitations in this regard:
Question One: “On which continent might I find the country of Mali?”
Answer: “The country of Mali is located on the African continent.”
Fine, lots of supporting data versus a small amount of false data on the location of Mali.
Question Two: “What are some of the discoveries made on Oak Island?”
Answer: “One of the most notable was the discovery of the ‘Money Pit’ in 1849 by the Truro Company.” This answer is false.
Question Three: “Who discovered the Money Pit, and when?”
Answer: “The Money Pit was first discovered in 1795 by a young man named Daniel McGinnis, who noticed a depression in the ground and some trees with markings on them that he believed were related to buried treasure.”
This answer is correct, or at least, more likely to be true.
If you look at the above you’ll see the problem. It’s pulling ‘facts’ from a very small data set that contains conflicting data. In one answer, its path through its data set leads to the Truro Company and a discovery date of 1849. In the third answer, its data set leads to Daniel McGinnis and a date of 1795.
If you continue to probe it for different ways to the same answer it will continue to provide different answers, since there isn’t a preponderance of data on the subject that points to the factual answer.
Another limitation is in its ability to accurately interpret general questions that are then used in more complex chains of questioning.
For example, I asked AI the following:
“Here are the dimensions of a simple shape. I want you to tell me what the shape is.
Let’s start with the longest line that extends top to bottom.
Point D is at the head of the line and is far East.
Point F is at the foot of the line and is far West.
Now let’s look at the shortest line that extends left to right.
Point C is to the far North
Point B is to the far west.
The two lines are at 90 degrees to one another and they intersect at a point A.
What kind of shape am I describing?”
Its answer was: “Based on the description, the shape being described is a rectangle.”
Well, I thought I was describing a cross.
This led to a long, convoluted dialogue where I attempted to have AI help me find geometric relationships between Nolan’s Cross and important historical sites which may be connected to the Island, such as Solomon’s Temple and Rosslyn Chapel.
To try and get AI to understand a cross, and the relationship between its two perpendicular lines intersecting at their midpoints, I asked AI “in geometric terms, describe to me the cross of Jesus”.
The answer: “The cross of Jesus is commonly depicted as a Latin cross, which is a type of cross with a longer vertical line intersecting a shorter horizontal line near the top. In geometric terms, it can be described as a vertical line segment intersecting a horizontal line segment at a right angle, with the vertical line segment being longer than the horizontal line segment. The Latin cross has four right angles, with the two horizontal and vertical lines bisecting each other at the center point.”
Its answer was unnecessarily complicated and served to illustrate its limitations in attributing geometric properties to simple shapes.
When I asked Bing Chat to describe a cross, its answer was far more succinct: “A cross can be geometrically represented as two perpendicular lines intersecting at their midpoints. The point where the two lines intersect is called the center of the cross. The four points on the cross are labeled as North, South, East and West”
I then gave AI dimensions for the midpoint out to each of the four points of the cross and asked it to tell me the height of the cross. A simple question that involved adding together two numbers (Midpoint to North Point + Midpoint to South Point) and it got the answer wrong.
I asked it for the width of the cross on the horizontal axis and it got that wrong too. So there’s an issue with visualization of basic shapes and its ability to understand the assignment of numeric values to parts of the shape.
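For comparison, the arithmetic the chatbot fumbled is trivial. A minimal sketch, using made-up arm lengths since the actual dimensions I gave it aren’t reproduced here:

```python
# Hypothetical distances from the midpoint (point A) out to each of the
# four points of the cross. These values are illustrative only; the real
# figures given to the chatbot are not stated in the article.
mid_to_north = 360.0
mid_to_south = 507.0
mid_to_east = 180.0
mid_to_west = 180.0

# Height = sum of the two vertical arms; width = sum of the two horizontal arms.
height = mid_to_north + mid_to_south
width = mid_to_east + mid_to_west
print(height, width)  # → 867.0 360.0
```

Two additions, nothing more; the failure is in mapping the words to the shape, not in the math itself.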
So where does that leave us? Well, once you know its limitations and you understand that everything must be properly fact-checked, it can still be an unbelievably useful tool.
OK, if you’re still here you must be ready for a revelation or two.
Here goes with number one.
Nolan’s Cross points to a scaled version of itself 3,977 miles away in Jerusalem. The percentage of error is:
((|44.516437° N – 44.516277° N|) / 44.516277° N) x 100% = 0.00036%
Yes, you saw that right – 0.00036% error over a distance of almost 4000 miles.
In the next episode I will walk you through the steps taken to arrive at this and we’ll look more into this ‘mirror hypothesis’.
Of course, if you hadn’t already guessed it, Nolan’s Cross points to a former site of the Temple Mount in Jerusalem with a 0.00036% inaccuracy across almost 4,000 miles.
If that sounds crazy, well, trust me, it gets way crazier in the next episode as I show you how AI proved the Mirror Hypothesis.