Something Big is Happening in AI [article]

7,297 Views | 109 Replies | Last: 1 day ago by jh0400
HECUBUS
Back when computers became widespread, knowing how to use word processors, spreadsheets, etc. was the threshold for employment. AI is just a better set of computer tools. Pretty sure the world was already destroyed by smartphones and video games. EVs still scare many.
Proposition Joe
We're going to see mass job replacement on two different fronts: AI (replacing pretty much any junior analyst, accountant, financial planner, etc.) and automation/robotics.

They're overlapping fields, but both are advancing at a significant rate, and much of it is being accelerated by things like the rise in the minimum wage.

Picture your average fast food restaurant. On average it employs 6-7 people per shift (a low estimate).

The ordering and checkout process will soon be completely automated. The cooking process at a place like McDonald's is already almost completely automated; cheap minimum-wage labor is the only real thing that has kept the massive shift from taking place.

Take all of those fast food restaurants down to just 1 employee, there mostly to oversee things.

We're there right now - it's just a matter of when the investment makes more financial sense than the cheap labor.
MemphisAg1
fulshearAg96 said:

TXTransplant said:

Serious question - I'm an engineer but not a computer person. My background is chemistry and a little biology.

But in biology, it's known that the more times a cell replicates, the more likely there is to be an error in that replication. As those errors propagate, you end up with mutations that manifest as things like cancer. This is really the heart of what we know as aging and why a lot of diseases are age related.

What's keeping the same thing from happening with digital data? We know data becomes corrupted over time. Computer systems don't last forever.

It's fundamentally the Second Law of Thermodynamics - entropy (disorder) is constantly increasing.

The miracle of human life is that two unique individuals come together to create more completely unique individuals. And we tend to do it when we are young (i.e., fewer errors with replication).

But how does an AI program do this? Won't its data eventually become too corrupt to use, causing it to effectively "die"?


Data governance.

In my 35-year career, this is one area that I have seen as pitiful across two Fortune 500 companies and a smaller one. All the execs talk big about data quality, but it is never resourced and managed intensively enough to get it right. Even to this day, as I'm approaching retirement, I'm working with a young engineer on a project and we are still having to validate data manually and with spreadsheets. The CEO and others don't trust what comes out of the system, and for good reason... much of the data is flawed.

These are all good companies I've worked for. Very successful financially. One's been a going concern for over 125 years. These are not fly-by-night rednecks whose primary tool is duct tape. Yet they all fall woefully short. The key reason they never achieved 100% (or 99%) data integrity is they never invested in the human resources it takes to vet things fully due to the constant pressure on headcount expense.

And now we're supposed to believe that companies will step up with the human investments needed to achieve 99%+ data integrity in an environment where they're supposed to be eliminating heads because of AI?

I'm not buying it.
AgsMyDude
bagger05 said:

PeekingDuck said:

AgCMT said:

It's coming. The only limitations are going to be the resources to maintain it.

This is the most important part of the whole equation. There's already a load gap and I'm not exactly sure how we solve it in the near term.

Agreed on this as well. At some point the computer science problems go away and we are left with mechanical engineering and thermodynamics problems. If everyone in the world were using these tools, where's the energy going to come from?

This might be the brake that slows all of this down.


The computer science problems aren't "going away," at least not anytime soon.

- signed, a software engineer with a computer science degree who leverages AI heavily, daily, to solve real-world business problems.
fulshearAg96
MemphisAg1 said:

[...] The key reason they never achieved 100% (or 99%) data integrity is they never invested in the human resources it takes to vet things fully due to the constant pressure on headcount expense.

And now we're supposed to believe that companies will step up with the human investments needed to achieve 99%+ data integrity in an environment where they're supposed to be eliminating heads because of AI?

I'm not buying it.

I agree with your comments. But data quality is not the same as data governance.

If you're working with a 100-year-old company, then yeah, manual intervention will no doubt be necessary, and data quality is likely an issue: numerous systems of record, 10 different ways to document the same part, no master record, etc.

Governance is the application of ownership, rules, process, etc. to the data.

And data integrity is all about confidence in the data: can you trust it... has it been changed or corrupted? If poor-quality data comes in and the same poor-quality data comes out, you in theory have good data integrity.
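To make the integrity half of that concrete, here's a minimal sketch in Python (standard library only; the record and its typo are invented for illustration). A checksum can prove the bytes haven't been changed or corrupted, while saying nothing about whether the data was any good to begin with:

```python
import hashlib


def fingerprint(record: bytes) -> str:
    """SHA-256 digest of a record; any change to the bytes changes the digest."""
    return hashlib.sha256(record).hexdigest()


# A record with poor *quality* (typo'd part number) but verifiable *integrity*.
record = b"part_no=WDGT-00l; qty=50"  # lowercase 'l' where a '1' belongs
stored_digest = fingerprint(record)

# Later, before trusting the record, re-check it against the stored digest.
if fingerprint(record) == stored_digest:
    print("integrity OK: bytes unchanged since the digest was taken")
else:
    print("integrity FAILED: the record was altered or corrupted")
```

Garbage in, garbage out, but verifiably the same garbage.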
MemphisAg1
fulshearAg96 said:

[...] And data integrity is all about confidence in the data: can you trust it... has it been changed or corrupted? If poor-quality data comes in and the same poor-quality data comes out, you in theory have good data integrity.

I'm using the terms interchangeably because you can't have data quality without excellent data governance. And it's the lack of resource commitment to data governance that I was referring to in my earlier post. The execs of those companies are good people and like to think they've made all these investments in data and systems, when in reality they don't resource it with the people and processes it takes to ensure excellent data governance and, by extension, data quality.

At the level where they make decisions, the data doesn't have to be perfect when you're dealing with billion-dollar choices. You can often get by with "rounding errors." But not at the operational level, where data quality is essential if you're going to lean on systems and programs to automate a lot of things. Some companies, like Amazon, have invested in data governance, and it shows in the quality of their execution across a very complicated space.

But a lot of traditional, non-tech companies are still way behind on data governance and quality. AI will do a lot to improve productivity across a number of fronts. We're already using it at work for the lower-hanging fruit on the tree, but the complicated stuff is going to take an investment in data governance to have confidence that AI is producing consistently accurate output.
fulshearAg96
MemphisAg1 said:

[...] But a lot of traditional, non-tech companies are still way behind on data governance and quality. AI will do a lot to improve productivity across a number of fronts. We're already using it at work for the lower-hanging fruit on the tree, but the complicated stuff is going to take an investment in data governance to have confidence that AI is producing consistently accurate output.

I see what you're saying. I'd challenge the comment about non-tech being behind on data governance, but that would be a much longer conversation. And when I say "challenge," I'm not trying to get into a pissing match... just suggesting that we probably have two different definitions that lead to two different opinions on whether non-tech is behind on governance.
Stan Crowch
Looks like OpenClaw is partnering up with OpenAI.
bagger05
Announcement:

https://steipete.me/posts/2026/openclaw


I think this is going to accelerate everything we're seeing happen right now.
Stan Crowch
These tech companies have thrown so much money at AI assistants, and the breakthrough concept comes from some guy messing around with a side project. It's really incredible. Can't imagine the numbers they threw at him.
Heineken-Ashi
Fun experiment..

Go count how many "it's not this, it's THAT" tropes were used in that post.

The answer is 31.

Until AI can write without sounding like it was trained by watching an Allen Iverson rant, I'll continue to believe it has much farther to go than the promotional scare tactics begging people to feed it more data would have you think.

$30,000 Millionaire
bagger05 said:

Announcement:

https://steipete.me/posts/2026/openclaw


I think this is going to accelerate everything we're seeing happen right now.


Now it's going to suck. Claude is so much better than the crap OpenAI produces. Sam Altman, ugh.
bagger05
Kinda surprised he went with OpenAI. Seems to me like Anthropic has all the momentum right now and I assume they talked to him as well. Maybe OpenAI gave him an offer he couldn't turn down.
Stan Crowch
Anthropic sent him a cease and desist letter when he initially named it "Clawd". They dropped the ball on this.
YouBet
bagger05 said:

Kinda surprised he went with OpenAI. Seems to me like Anthropic has all the momentum right now and I assume they talked to him as well. Maybe OpenAI gave him an offer he couldn't turn down.


It's also interesting because OpenAI is behind Anthropic on monetizing AI, since they initially focused on consumer subscriptions. They're playing catch-up to Anthropic's corporate strategy.

On the surface this doesn't seem to help them with monetization, if this dude wants to keep it open source and for fun, but we also don't know the particulars of the deal.

Regardless, I'm not a fan of Altman - he seems like a bad actor to me.
bagger05
Yes. Altman kinda creeps me out.
Tex117
I mean, having truly spent time with it this year...using it as a tool....

Eff man. AI in large part is going to take over. All white collar jobs are simply going to be "managing AI."

Foamcows
Having spent the last 10 years in the heart of one of the major players, and per our internal metrics being in the top 1% of employees using AI based on interaction count, here's my 2 cents for what it's worth.

Thanks to AI, my output has increased 10x, comparing my first 9 years at the company with the past year. The AI tools finally had access to the context needed to help me, and they reached a level of comprehension where they became a net time saver. Once those two bars were met, using them became a no-brainer.

Currently, I can write code in nearly any language without much impact on how fast I can build. Before, once I left my bubble, I could do the work, but it was slow and required a ton of throwing things at the wall to see what would stick. Now I just say "take this code (in any language) and change it to do X/Y/Z," have it write a test to validate the behavior, walk away, and come back a half hour later to find it done.

My entire workflow has changed in the past 6 months. I now spend more time building out the testing and validation solutions (using AI) and very little time on the actual solution. I have very well-defined style guides for how the code should look and feel, to keep it consistent with what we have and easy for others to come behind and understand. The reason I spend more time on the test than the solution: I can set up a test, tell the AI to make a solution, walk away, and let it churn in the background. It will bounce around trying different things while I'm off doing the ten other things that were sitting in my backlog and that I couldn't get to before.
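To make that loop concrete, here's a minimal sketch in Python with pytest; all the names are made up for illustration. The human writes the failing test first, then hands it to the AI, which iterates on dedupe.py until pytest goes green:

```python
# test_dedupe.py -- the human writes this first; the AI's job is to make it pass.
import pytest

from dedupe import dedupe_preserving_order  # module the AI will create


def test_removes_duplicates_keeps_first_occurrence_order():
    assert dedupe_preserving_order([3, 1, 3, 2, 1]) == [3, 1, 2]


def test_empty_input():
    assert dedupe_preserving_order([]) == []


def test_rejects_non_list_input():
    with pytest.raises(TypeError):
        dedupe_preserving_order("not a list")


# dedupe.py -- one implementation the AI might converge on:
#
# def dedupe_preserving_order(items):
#     if not isinstance(items, list):
#         raise TypeError("expected a list")
#     seen = set()
#     return [x for x in items if not (x in seen or seen.add(x))]
```

The tests encode the requirements, so the AI can churn unattended until all three assertions pass.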

On a pure lines-of-code basis, I'm pushing out about 10x the volume I was before AI. Which is great: I'm able to get more done. However, it's coming at a cost.

My coding skills are dulling quickly. I'm extremely efficient now at using AI to build, but if I were to interview for my current job, it would be a struggle to pass the coding exam, since I haven't written any code in 6 months. Another downside: I don't know the solutions I've built with AI very well. I have a high-level idea of what they do, but not much more, because I didn't make them. Previously, after building a solution, I knew its ins and outs. Now I'm about as familiar with the code I had AI write as I am with code my colleagues wrote.

The other thing I noticed: because I don't spend much time getting my hands dirty with the data anymore, when I complete a project I skip the part of the process where I'd get ideas about what else we could do in the space, what to do next, or the nice-to-haves I'd knock out because I was already in there. None of that happens now. When I finish something using AI, I no longer have a long list of brainstorm ideas for the next version.

Where do I see AI going? Right now, as a developer, AI is not well integrated into our workflows. In the future, I expect there will be an AI section of our code repo where AI stores notes and other context it needs to work efficiently in a shared space where many developers are updating the same code package. Right now none of this framework exists natively, but in the next year that's where I see the opportunity.

I can already see some of this with WebMCP, which will be the way AI natively interacts with websites without having to drive a UI like we do (https://developer.chrome.com/blog/webmcp-epp). This is the next opportunity... building out ways for your website to integrate with AI tools. The faster you make your service "AI compatible," the faster AI users will adopt your tool as the one they use.
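If it helps to picture the idea, here's a toy sketch in Python. To be clear, this is not WebMCP's actual API (that proposal is JavaScript running in the browser), and every name below is invented for illustration; the point is just that a site exposes named, structured actions an agent can call directly instead of clicking through a UI:

```python
# Toy illustration of "expose site actions as tools" -- NOT the WebMCP API.
from dataclasses import dataclass, field
from typing import Callable


@dataclass
class ToolRegistry:
    tools: dict[str, Callable[..., object]] = field(default_factory=dict)

    def register(self, name: str, fn: Callable[..., object]) -> None:
        """Expose a site action under a stable, machine-readable name."""
        self.tools[name] = fn

    def call(self, name: str, **kwargs: object) -> object:
        """What an agent would invoke instead of scraping or clicking the UI."""
        return self.tools[name](**kwargs)


registry = ToolRegistry()
registry.register("search_products", lambda query: [f"result for {query!r}"])
registry.register("add_to_cart", lambda sku, qty=1: f"added {qty} x {sku}")

# An agent calls the structured action directly -- no UI automation needed.
print(registry.call("search_products", query="duct tape"))
print(registry.call("add_to_cart", sku="DT-8FT", qty=2))
```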

Do I think AI is going to take away my job? Probably not mine, because I'm currently using it to surf the wave and do things I couldn't do before at much faster speeds. However, those who aren't spending time developing their skills to go faster with AI will be left behind; they might not be eliminated, but they might not measure up to colleagues who do.

(sorry for all the edits, it seems my ability to write is suffering as well)
bagger05
Very thoughtful and insightful.

I think your comment about how "white collar jobs will just be managing AI" is interesting. 40 years ago someone would probably have said that "white collar jobs will just be working on computers."

Don't remember who said it on this thread, but we are quickly approaching the point where "I don't really use AI" will be like "I prefer faxing people to email."
GeorgiAg
Foamcows said:

[...] Do I think AI is going to take away my job? Probably not mine, because I'm currently using it to surf the wave and do things I couldn't do before at much faster speeds. However, those who aren't spending time developing their skills to go faster with AI will be left behind; they might not be eliminated, but they might not measure up to colleagues who do.

This sounds to me like the story of two campers being approached by a huge grizzly bear. One started frantically putting on his shoes while the other did nothing. The second camper asked, "Why are you putting on shoes? You can't outrun that bear!" The first replied, "I don't have to outrun the bear. I just have to outrun you."

If you can now work at 10 times the volume, that means nine other guys won't have a job.
bagger05
Or it means that we can create 10x more stuff.
GeorgiAg
bagger05 said:

Or it means that we can create 10x more stuff.

Cure cancer please. Mom died of it Friday.

F cancer.
jh0400
Foamcows said:


My coding skills are dulling quickly. I'm extremely efficient now at using AI to build, but if I were to interview for my current job, it would be a struggle to pass the coding exam, since I haven't written any code in 6 months. Another downside: I don't know the solutions I've built with AI very well. I have a high-level idea of what they do, but not much more, because I didn't make them. Previously, after building a solution, I knew its ins and outs. Now I'm about as familiar with the code I had AI write as I am with code my colleagues wrote.

The other thing I noticed: because I don't spend much time getting my hands dirty with the data anymore, when I complete a project I skip the part of the process where I'd get ideas about what else we could do in the space, what to do next, or the nice-to-haves I'd knock out because I was already in there. None of that happens now. When I finish something using AI, I no longer have a long list of brainstorm ideas for the next version.

These are the major challenges I see with AI, the ones that don't necessarily lend themselves to making us better overall. Today we've got tools that make people who understand how things work more efficient at clearing our existing backlog of tasks and ideas. As we become more reliant on AI tools for efficiency, we stand to lose overall knowledge, because knowing why something works matters less. We'll all be able to tell time, but few if any will know how to build a watch.

On the ideas side, AI is great at doing but not so good at figuring out what needs to be done next. While it can speed the pace of innovation for existing ideas and backlogs, it poses a real risk to the ideas pipeline.
TXTransplant
What are the big differences between the different "name brand" AI tools? I only have access to CoPilot, but I know there is also Gemini (that's Google, right?), Apple AI, and y'all have mentioned Claude here (I'd never heard of that one).

Do they talk to each other? Share data? If not, doesn't that just end up siloing more data and diluting the power of AI?

I'm limited to using CoPilot at work. But if I'm in a program/database that isn't Microsoft (and I work in several of these, both company-internal and external), CoPilot is useless.

I understand that non-Microsoft software can be "linked" to CoPilot, but it seems like each software vendor just wants to make that a "custom add-on" and charge us more money. That's a real hurdle in the current economic climate.
YouBet
Great post and insight into the behind-the-scenes reality. Essentially what we were finding before I bailed, but you stated it far more elegantly.
YouBet
TXTransplant said:

What are the big differences between the different "name brand" AI tools? I only have access to CoPilot, but I know there is also Gemini (that's Google, right?), Apple AI, and y'all have mentioned Claude here (I'd never heard of that one). [...]


IMO, CoPilot brings up the rear of the name-brand AI tools. I'm sure y'all went with it because you're already an MS shop and got a deal on it. I'd say Gemini and CoPilot are more supplements to your work apps (365 and Google Workspace).

Claude and ChatGPT are more about search and software development, with Claude being the current gold standard on the latter.

They don't really talk to one another because they're competing solutions.

Apple AI seems mostly useless to me so far, but I don't own a Mac and only own an iPhone.
TXTransplant
That's pretty much what I thought. And it causes a real problem when the software you depend on doesn't interface with Microsoft. Just based on how AI is being pushed in the environment I work in, I don't think a lot of people realize that it only works on Microsoft-supported programs.

I really only use Microsoft products to communicate my work product - most of my actual work is done outside of that system.

Looks like Microsoft is a major investor in OpenAI (the maker of ChatGPT) - so maybe one day CoPilot and ChatGPT will be integrated?
YouBet
TXTransplant said:

[...] Looks like Microsoft is a major investor in OpenAI (the maker of ChatGPT) - so maybe one day CoPilot and ChatGPT will be integrated?


It's confusing. OpenAI's LLMs power CoPilot, but MS adds a customized layer on top that integrates everything back into 365 for their own purposes.

So, in effect, they are already integrated, but not really.
GeorgiAg
TXTransplant said:

What are the big differences between the different "name brand" AI tools? I only have access to CoPilot, but I know there is also Gemini (that's Google, right?), Apple AI, and y'all have mentioned Claude here (I'd never heard of that one). [...]

YouBet's post had good descriptions. It didn't mention Grok.

For a short, glorious period, Grok was really bad about image and video moderation. You could strip anyone in seconds and sometimes trick the moderation.

Of course, some idiot posted the stuff online, Grok got called out, and now we no longer have hotties.

(I kid - deepfakes are bad)
TTUArmy
GeorgiAg said:

bagger05 said:

Or it means that we can create 10x more stuff.

Cure cancer please. Mom died of it Friday.

F cancer.

My sincere condolences, GeorgiAg.

Hopefully, all of this newfangled technology will be able to help us solve a lot of these horrible diseases.
500,000ags
I do wish that when we hear opinions and use cases, people would identify their line of work and level a bit more. A SWE is going to say something completely different than an accountant, and a CTO is going to say something different than a SWE.

Yesterday I was creating some sensitivity tables (beloved in finance) and wanted to make the inputs along the top and left dynamic. So I went to ChatGPT for a methodology, and it was ****ing terrible. I ended up finding a YT video that helped much more in about 5 minutes.
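For what it's worth, here's one way to get a two-input sensitivity grid with dynamic axes, a minimal sketch in Python with NumPy/pandas (the valuation function and the input ranges are invented for illustration; in Excel, a what-if data table or dynamic array formulas play the same role):

```python
import numpy as np
import pandas as pd


def npv_like_metric(growth: float, discount: float) -> float:
    """Stand-in valuation model; swap in your real formula."""
    years = np.arange(1, 6)                        # 5 years of cash flows
    cash_flows = 100 * (1 + growth) ** years
    return float(np.sum(cash_flows / (1 + discount) ** years))


# Dynamic axis inputs: change these ranges and the whole table recomputes.
growth_rates = np.round(np.arange(0.00, 0.11, 0.02), 2)    # left-hand inputs
discount_rates = np.round(np.arange(0.06, 0.13, 0.02), 2)  # top-row inputs

table = pd.DataFrame(
    [[npv_like_metric(g, d) for d in discount_rates] for g in growth_rates],
    index=pd.Index(growth_rates, name="growth"),
    columns=pd.Index(discount_rates, name="discount"),
).round(1)

print(table)
```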
Dr. Doctor
I do chemical plant design work, and they have been trying to do "AI" on our work for YEARS.

Every plant/site is different, and by the time you've designed for one case, the next case destroys the work you did. There are some things you can do to automate, from simulation to datasheets to vendors, but a lot of that has been embraced and (hopefully) implemented in my past life.

The other avenue that's been under scrutiny since at least 2012-2013 is plant automation. But the stream of data from a single unit, much less the WHOLE plant, is very difficult to map, and each plant (even for a single company) is different and takes years to implement for each site.

~egon
Hoyt Ag
Dr. Doctor said:

I do chemical plant design work, and they have been trying to do "AI" on our work for YEARS. [...]

I work in power generation, and we see the same things you describe. Each of our plants is designed differently, even when built by the same company within a year or two of each other. I'd venture that only about 40% of our facilities are comparable; the rest differ quite a bit. We have been pitched some AI tools, some successful, but most fail miserably since our plants are all custom designs and the tools aren't interchangeable between them.
fulshearAg96
Hoyt Ag said:

[...] We have been pitched some AI tools, some successful, but most fail miserably since our plants are all custom designs and the tools aren't interchangeable between them.

In both of the above examples, is this something y'all have been attempting internally, or do you have an S.I. supporting it? What type of software package(s)?

In both chemical and power generation you'd have some pretty standard opportunities that scale regardless of differing design builds; predictive maintenance is one example. The difference in design isn't your showstopper. Data volume can also be addressed. Net: this comes down to the right implementation partner, an appropriate platform, and a budget. But yes, DIY would be an uphill climb.
Foamcows
For those struggling to get results from AI tools: you're probably not giving them enough context.

Sure, different models have different strengths...some excel at writing, others at code, some at reasoning. But regardless of which tool you use, the more context and examples you provide about what you want and how you want it, the better your results.

One caveat: context is essentially "memory," and each model has a finite amount. You can't just throw everything at it; be selective and concise. Hit the context ceiling and the tool becomes useless fast. The good news is that recent models (at least where I work) are getting more context space, which unlocks what we can do in a single session.
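To put rough numbers on that "memory" budget, here's a small sketch assuming OpenAI's tiktoken tokenizer package; the context limit, reserve, and context strings are made up, and other vendors' models tokenize differently, so treat it as an estimate:

```python
import tiktoken  # OpenAI's tokenizer library: pip install tiktoken

CONTEXT_LIMIT = 128_000   # hypothetical model context window, in tokens
RESPONSE_BUDGET = 4_000   # tokens reserved for the model's reply

enc = tiktoken.get_encoding("cl100k_base")


def tokens(text: str) -> int:
    """Count tokens the way an OpenAI-style model would."""
    return len(enc.encode(text))


# Stand-ins for real context sources (style guide, examples, the task itself).
style_guide = "Functions under 40 lines. Type hints everywhere. No globals."
examples = "Good: small pure functions.\nBad: one 500-line do_everything()."
task = "Refactor the parser to stream input instead of loading it all."

used = tokens(style_guide) + tokens(examples) + tokens(task)
remaining = CONTEXT_LIMIT - RESPONSE_BUDGET - used
print(f"{used} tokens of context used; {remaining:,} left before the ceiling")
```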

Treat AI tools like an intern on their first day. They graduated from the best school in your industry and know all the standards and terminology (though you might need to explain what your industry is), but they know nothing about how your company or team works. Show them examples. Give them access to your team's initialisms and internal terms.

Don't expect perfect results on the first try. If the output isn't what you want, refine your prompt or add clarifying examples. AI tools get better as you iterate with them. And be specific...vague requests get vague results. "Make this better" is useless. "Reduce this function's complexity by extracting the validation logic" gets results.
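To make that last prompt concrete, here's the kind of before/after it's asking for, a hypothetical Python example (the function names are invented, not from the thread):

```python
# Hypothetical before/after for "extract the validation logic".

# Before: validation tangled into the main flow.
def register_user(name, email):
    if not name or len(name) > 64:
        raise ValueError("bad name")
    if "@" not in email:
        raise ValueError("bad email")
    return {"name": name.strip(), "email": email.lower()}


# After: validation extracted into small, separately testable helpers.
def validate_name(name: str) -> None:
    if not name or len(name) > 64:
        raise ValueError("bad name")


def validate_email(email: str) -> None:
    if "@" not in email:
        raise ValueError("bad email")


def register_user_v2(name: str, email: str) -> dict:
    validate_name(name)
    validate_email(email)
    return {"name": name.strip(), "email": email.lower()}


print(register_user_v2("Ag", "AG@example.com"))  # {'name': 'Ag', 'email': 'ag@example.com'}
```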

Show, don't just tell. Instead of describing what you want, provide actual examples of good vs bad outputs. "Make it look like this, not like that."

One thing to watch for in long conversations: the tool may "forget" earlier context or instructions. If you notice it drifting, periodically reinforce key requirements or just start a fresh session for new tasks.

For my workflow, I've used AI to build an entire repo of information specifically to help it fine-tune outputs to match how I'd do things. When it does something wrong, I add a rule or test to prevent it from happening again. Low lift, high reward. And occasionally I'll test that the AI is actually using that context correctly, because sometimes it ignores or misinterprets what you've provided.

If your AI tool supports knowledge bases, use it to build one and plug it in. When wrapping up work, have it add context to the knowledge base so it's there for your next session. Let AI store it in whatever format works best for its needs.

For ChatGPT, the equivalent is "projects" where you can upload files, images, and sample docs. It also reads from past conversations opened in that project space.
 