
Is anyone else using AI to support engineering design work?

In my limited experience with ChatGPT, I found it like having a conversation with a really intelligent, book-smart person who can memorize and regurgitate facts about absolutely anything and dazzle you with word-salad bullshit, but who possesses zero critical thinking skills, real-world street smarts, or actual problem-solving ability.

I'm sure the versions the general population has access to are much different from what large corporations in various industries are developing and paying for.

I would love to someday play around with AI-generated CAD modeling.
 
If you ask, I'll list my qualifications.
I'll ask, just because I always find it interesting how multi-talented and diverse everyone on the forum appears to be.
At least in terms of background, training, and work experience. OTOH, we are still just a bunch of old white dudes, for the most part.
 
Ok, with a real keyboard in my hand I'll attempt to be clearer:
Current GPT-like AIs are a statistical model of text. They break your query down into tokens (roughly words, but sometimes word fragments, symbols, etc.), then take that sequence of tokens and return, for every token in the vocabulary, the probability that it comes next. The model then rolls the dice to choose a likely token, appends it to the query text, and does the same thing again.
The models are characterized by their vocabulary size (GPT-3 ran about 50k tokens, IIRC) and by how many tokens they can accept in a query (the context window).
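The tokenize/score/roll-the-dice/append loop described above can be sketched in a few lines of Python. This is strictly a toy illustration: in a real GPT the next-token scores come from a large neural network, whereas here a made-up stand-in function (`fake_logits`, my invention) returns random scores over a tiny five-token vocabulary, so only the loop structure is faithful.

```python
import math
import random

# Toy vocabulary; a real model's runs to tens of thousands of tokens.
VOCAB = ["the", "lathe", "cuts", "steel", "<end>"]

def fake_logits(context):
    # Stand-in for the neural network: score every vocabulary token
    # given the context. (A real model's scores depend on the context.)
    return [random.uniform(-1.0, 1.0) for _ in VOCAB]

def softmax(logits):
    # Turn raw scores into probabilities that sum to 1.
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def sample(probs):
    # "Roll the dice": pick a token in proportion to its probability.
    r = random.random()
    cum = 0.0
    for tok, p in zip(VOCAB, probs):
        cum += p
        if r < cum:
            return tok
    return VOCAB[-1]

def generate(prompt_tokens, max_new=10):
    tokens = list(prompt_tokens)
    for _ in range(max_new):
        next_tok = sample(softmax(fake_logits(tokens)))
        if next_tok == "<end>":
            break
        tokens.append(next_tok)  # append, then repeat with the longer context
    return tokens

print(generate(["the", "lathe"]))
```

Note there is no step anywhere in that loop that checks whether the emitted text is *true* - which is the point of the paragraphs that follow.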
It's spectacular how well the network takes these inputs and generates "meaningful" text. But there is no validation of the semantics - it's statistical retrieval.
Neither in training the model nor in evaluating it is there any validation against the real world. There is no differentiation between truth, parody, and outright lies. All of these continuations are weighted purely on the appearance of the input text.
The less supporting material there is in the training database, the more likely you are to effectively wind up with a near word-for-word dump of the input corpus.
You got decent answers to machining questions because the input corpus on machining isn't particularly polluted by misinformation, and the sections of the answer are effectively cribbed from a small number of sources.
There's value there as a summarizing tool.
But the day someone mixes in some other discipline in which rake angles refer to green cheese cutters, the quality will decrease - call it dataset vandalism, and it's happening today. There is no way, short of careful selection of the training corpus, to avoid this side effect.
Regarding writing code with it, yes, I've done that - it can generate a quick boilerplate, effectively splicing together web tutorials. But that generated code is almost always incorrect, and requires significant work to massage to correctness. Sadly, the users of said code don't always recognize how flawed it is. I liken it to asking an undergraduate for a homework assignment - you'll usually get something that superficially resembles the solution, but you need to check it carefully for correctness around edge cases, even when it appears to work at first glance.
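To make the edge-case point concrete, here's a hypothetical example of my own (not actual model output): the kind of plausible-looking function generated code tends to contain, next to the version a careful review produces.

```python
# Plausible-looking "generated" code: superficially correct, and works
# on every example you're likely to try first - but it crashes with
# ZeroDivisionError when handed an empty list.
def average_generated(values):
    return sum(values) / len(values)

# After human review: the empty-sequence edge case is handled explicitly,
# with an error message that says what actually went wrong.
def average_reviewed(values):
    if not values:
        raise ValueError("average of an empty sequence is undefined")
    return sum(values) / len(values)
```

Both versions pass the casual "it appears to work" test; only one survives the edge case.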
 
I'll ask, just because I always find it interesting how multi-talented and diverse everyone on the forum appears to be.
The quick summary is that I've worked on Graphics Processing Units (GPUs) since before there were GPUs, with a focus on hardware/software co-design. I led software for a GPU product at Intel in the late 2000s/early 2010s (can you make a GPU out of a bunch of x86 cores on a ring bus? Yes, you can, and you need a ton of software and some rather interesting architectural extensions, which became the AVX-512 instruction set extensions). I did a bunch of work on VR and AR, mostly at Google, and for the last few years I was at Nvidia, where part of my work was GPU changes to support multiple interactive users on a single GPU for large-scale datacenter deployments.
The thing about working on GPUs the last 10 years is that it has become impossible to ignore the machine learning/AI applications - they are now much more important on that hardware than the "graphics" part of the "graphics processing unit".
I've built systems and written papers that use neural models to solve graphics problems as well as various control problems in datacenter management.
It's all good fun, until the bubble pops and people realize that without some form of semantic validation all you have with the ChatGPTs are BS generators. The semantic validation side is hard - that's what classical AI researchers have been doing for 60 years. Applying or re-inventing it for GPT validation/inclusion is a major open problem and much harder than generating likely text.
 
I started dabbling with ChatGPT but have kind of migrated to Gemini. Nothing serious yet, just messing around. Some things it does very well, others... not so much. I think this stuff is still in its infancy & AI is already a maligned marketing term. But specific to these apps, I'm generally impressed with some aspects. Like text-based tools - summarize this, or distill that, or rewrite this from a particular perspective. Some of my programming friends use it a lot to debug, improve code, or look things up, just quicker. I know enough VBA to be dangerous, so I pasted some code with planted errors. It not only spotted them, but provided me background information & better recommendations. A Google search would have directed me to Best Buy to buy a new PC. So despite early days, it's really quite exciting, especially for something that is free. But it also doesn't take much to confuse it.

I'm sure there are lots of AI books out there, but FWIW I just finished this on Audible & it was pretty good: A Brief History of Artificial Intelligence: What It Is, Where We Are, and Where We Are Going, by Michael Wooldridge.
For an interesting interview with Dr. Wooldridge see
 
Current GPT-like AIs are a statistical model of text.
Thanks Paul.
At the same time, both more and less than what people think it is.
 
I'm going to start using ChatGPT more often when Google searches are not helpful. For example, I was searching for timing pulleys and it was not obvious to me how AF- vs. BF-style HTD pulleys differ, so I asked.

[Screenshot: ChatGPT's answer on AF vs. BF pulley styles]
 
I'm going to start using ChatGPT

I've found that too!

Google has become so useless that I've all but abandoned it. If you asked Google your question, you used to get decent info, including a link to the source of the info ChatGPT found for you. Now all you get is advertising, useless YouTube videos, and other sites that generate revenue.

I've switched to the Samsung browser and started using ChatGPT to filter the crap for me. I'm sure it won't last long, but for now it's a big improvement!

Edit - I should also add that our members are the best search ever! You can ask them ANYTHING and someone will know!
 
An example that might appeal to people who want a text-based synopsis of a YouTube video (using Gemini on a random Clough42 video). This one is pretty short & subject-oriented, but one can appreciate the power on a longer, more detailed video. Not bad for 2 seconds of processing.

[Screenshots of Gemini's text summary of the video]

 
An example that might appeal to people who want a text-based synopsis of a YouTube video (using Gemini on a random Clough42 video).

Cool Peter. Never thought of summarizing a YouTube video that way!

Should I assume that Gemini requires subtitles on the video?
 
Google has become so useless that I've all but abandoned it.
Yesterday we attended the McGill Engineering convocation; one of the speeches was by Shoshana Zuboff.
She gave a pretty dire warning to the graduating class about how the masses are being manipulated by a small group of unelected corporations (Alphabet, Meta, etc.). It was an interesting speech, and it meshes well with your comment.

 