
Anyone using ChatGPT?

Do you find it so much better for your particular applications that it justifies the 28 CAD/month?

I am on roughly the same page as @Arbutus. I bought a trial membership first and was so blown away by the time savings and the quality of the research, that I popped for the annual subscription.

I love that it will write a full report and give citations for the sources so I can check them as I see fit. I wish I had found research material so easily 50 years ago.

Yes, for me anyway, it's worth the annual fee. What is the rest of my life worth? Why waste a single moment of that time researching things that Gemini can do for me? I'm also pretty darn good at researching things. But I confess that I'm not that good compared to Gemini Pro. It's amazing. Not much else to say.
 

My view has steadily gone downhill. It lies, hallucinates, over-promises, and just makes stuff up. Often the citations are bogus, and when pressed it admits it.

Delving into what it is and isn't: it is not intelligent. It does not think, plan, design, or test, and it doesn't even have much persistent memory (not data storage, but memory of the gist of the discussion/problem/code). It simply calculates the next best "token" to spit out. Now, it does have a huge amount of training, so it can generate impressive things, but it's still just a probability engine running over a large language model's patterns. It takes in everything you said and spits something out, but it's full of errors. And when you work with it to correct one, it does not edit the previous output. You cannot rely on it to make changes, because it is not editing a document; it is putting your updated notes through the probability engine and spitting out a new document that could have lots of other things changed.
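For anyone curious, that "next best token" loop can be sketched in a few lines. This is a toy, of course: the vocabulary and probabilities below are completely made up for illustration, where a real LLM scores ~100,000 tokens with a neural network, but the generation loop is the same shape: score, sample, append, repeat.

```python
import random

# Made-up distributions over possible next tokens, keyed by the current word.
NEXT_TOKEN_PROBS = {
    "the": {"lathe": 0.5, "mill": 0.3, "chuck": 0.2},
    "lathe": {"is": 0.7, "spins": 0.3},
    "mill": {"is": 0.6, "cuts": 0.4},
    "chuck": {"is": 1.0},
    "is": {"running": 0.5, "broken": 0.5},
}

def generate(prompt: str, max_tokens: int, seed: int = 0) -> str:
    """Repeatedly sample the 'next best token' given only the last word."""
    rng = random.Random(seed)
    tokens = prompt.split()
    for _ in range(max_tokens):
        dist = NEXT_TOKEN_PROBS.get(tokens[-1])
        if dist is None:  # no known continuation: stop generating
            break
        words = list(dist)
        weights = [dist[w] for w in words]
        tokens.append(rng.choices(words, weights=weights)[0])
    return " ".join(tokens)

print(generate("the", 4))
```

Note there is no "document" being edited anywhere in that loop; every run is a fresh roll through the probabilities, which is exactly why a correction in one version doesn't stick in the next.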

It's great for broad-brush answers where there can be ambiguity and generalization, like "outline a digital marketing plan for such and such a business." The failures start when you ask it to get more and more specific.

Case in point. For two days now I have spent (wasted) time with it trying to create a 5-page process to install some open source software on a Linux server. Well-trodden ground, and open source. Should be child's play, right? We're up to version 22 and it still doesn't work. I realized last night that it was introducing errors that didn't exist in previous versions. It would suggest doing A to solve X. Then we'd have problem Y. It does B to solve Y, but forgets solution A and re-creates problem X three versions later. Tell it to fix X and Y comes back. Try monitoring and avoiding those problems! Infuriating.

It doesn't matter so much on a 5-page essay about Napoleon's campaign; that is interpretive, shades of grey. But it sure matters on a server stack, where it either works or it doesn't! On the whole, outside of general prose (where you'd still better edit before sending), its performance has been dismal. Same with Gemini and Claude, which are so similar to use that I wonder whether they aren't really the same thing under the hood.

If it can't do that, install a piece of software, well, my fears of it taking over are allayed. And as for AGI, general intelligence: it is currently unknown how to achieve it, or even whether it's possible. The current belief is that it cannot and will not be today's AI with more horsepower; AGI is more like a yet-undiscovered new species.

I'm increasingly of the view that AI is a mania, following on tulip bulbs, dot-coms, subprime mortgage bonds, and cryptocurrencies. Tempur-Pedic's advertising tells me their mattress has AI, and good luck getting a tech startup funded unless AI is prominent in the pitch. Meanwhile, clueless CEOs who have never used it and know nothing about its strengths and weaknesses are telling teams to get on board with AI if they want to keep working there. A mania, I say, complete with bald-faced lies (WTF, we're not supposed to take what you say literally? See below, on the heels of its CEO telling us GPT-5 thinks better than ever. Lies!) whose greatest potential is to dumb down a generation of students using it to do their schoolwork.


[attached screenshots: Clipboard Image.jpg, Clipboard Image (1).jpg]
 
That has been my experience thus far. Some things it spits out are really amazing and spot on. The danger is getting mesmerized by the magic and assuming that level of performance carries across other subjects. When you point out an outright fib or incorrect generalization, it has some canned excuses. It would be better if it showed a running confidence metric like 2/10 or 9/10, but that might reveal something the principals aren't that eager to convey.

I use the (freebie) AI tools as a more efficient internet search mechanism: basically what Google SHOULD be, without the obvious steering/filtering toward commercial sites, which is a much bigger waste of personal time. Eventually the AI chat gets to a point where you can say "please substantiate this conclusion with links/references," which it either coughs up or doesn't, and that is revealing in itself. But when the link is useful, I usually find it was a net time-saver to have landed on it and extracted the relevant text, versus a typical online search, which is about as productive as panning for gold, the commercial ads being the piles of useless gravel.

I think it's generally over-hyped, but we are also at the infancy stage. People love to extrapolate the future curve, especially media sources who make their money by grabbing attention; whether it's remotely accurate or viable is not even a consideration, never mind an obligation. But if you went back to the equivalent early days of the internet, I bet very few people would have correctly predicted both the capability AND the timing. The other 98% of the herd would have guessed wrong, in hindsight, including some companies that should have / could have capitalized and blew it big time. But hindsight is 20/20; most would agree the internet exploded in capability and implementation beyond most estimates.

[image attachment, courtesy of Gemini LOL]
 
I guess I can safely say that I have found almost any AI is 50x better than watching the YouTube videos that pop up in a routine Google search for whatever. It is also teaching me to ask better questions.

Bell just offered me 1 yr of Perplexity Pro free.
 

Agree with you 100%

AI is a lot like a guy I once worked with who got the nickname "Google." He had all the answers, but at a very superficial level, and not always right.
 

Maybe the current version of AI is better suited for people who have already developed a knowledge base the old fashioned way - years of studying, struggling to get the answers, making lots of mistakes and learning from them, etc.

Like the old saying goes, "the tool is only as good as the person using it".

I will continue to experiment with it but with the same level of caution I treat all information found on the internet.

Tried out Claude while trying to get some software to work. In the end, reading the online manual for the 5th time solved the problem.
 
Maybe the current version of AI is better suited for people who have already developed a knowledge base the old fashioned way - years of studying, struggling to get the answers, making lots of mistakes and learning from them, etc.
I think that is a correct observation. Gemini and ChatGPT are probability engines working with a huge data resource, but without clear direction and the challenge/correction part of the conversation, the LLM will answer your question with the most probable response.

It really doesn't know why you would need to use 4140 steel for the part you are designing unless there are specific examples in its training that contain the same keywords and theme. If it does not find an answer directly, I have found that LLMs hallucinate to generate some answer anyway. Unless the user has enough knowledge and experience, those hallucinated answers are often accepted without challenge.

As others on this forum have mentioned, an LLM's ability to maintain a conversation thread is very limited. Gemini 2.5 currently offers a context window of 1 million tokens. A massive token window allows the model to process and reason about huge amounts of information in a single pass, equivalent to an entire novel, a large codebase, or hours of video, without forgetting the beginning of the document by the time it gets to the end. This makes an enormous difference when coding a large application, for example.
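To get a rough sense of scale for that 1-million-token window, a common rule of thumb (an approximation, not an exact tokenizer count) is about 4 characters per token for English text. A quick back-of-envelope calculation:

```python
# Back-of-envelope estimate of what a 1M-token context window holds.
# Assumption: ~4 characters per token for English (a rough rule of thumb).
CHARS_PER_TOKEN = 4
CONTEXT_WINDOW_TOKENS = 1_000_000

def approx_tokens(text: str) -> int:
    """Crude token estimate from character count."""
    return max(1, len(text) // CHARS_PER_TOKEN)

# A typical novel is roughly 90,000 words at ~6 chars/word (incl. spaces).
novel_chars = 90_000 * 6
novel_tokens = novel_chars // CHARS_PER_TOKEN  # ~135,000 tokens

print(f"One novel is roughly {novel_tokens:,} tokens")
print(f"A 1M-token window holds about {CONTEXT_WINDOW_TOKENS // novel_tokens} novels")
```

So a 1M-token window really does hold several novels' worth of text at once, which is why long-context models forget so much less mid-conversation.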

The AI marketing hype was probably written by MBAs (Masters of Bugger All) using LLMs.
 