Had ChatGPT help me out with my contexts

ivanjay205

Registered
I spent some time with ChatGPT today because I was feeling some friction in my contexts and was looking for help. I really enjoyed the process. As the president and owner of my business, I use two contexts, WOB (Work on Business) and WIB (Work in Business), to help me orient myself and stay focused on the WOB side to drive my business forward. However, I have started to feel some friction in prioritizing within those lists, so I set out to define what is important to me. As I was explaining things to ChatGPT to see what it would come up with, I decided that having a document on my desk clearly explaining and defining each context would be great. So after about 20 minutes of playing around, here is the document "we" landed on. I will probably work through a similar process for my personal contexts as well, although those are a bit easier to define.

For those who have not seen what I do before: I have two sets of contexts, a Mindset section and a Tool section. Tool is exactly as David describes in the book (phone, computer, office, etc.), so if a tool is the limiting factor I can go there. Mindset is for when I am not limited by a tool (typical in today's world, especially with computer work), so I can go to my mindset to pick the right actions.
 

@ivanjay205

Thank you for your very good post

Just a GTD chime in with all due respect to ChatGPT

For appropriate engagement, I have been using a different set of 'Overarching' Contexts: Internal and External, which seems especially helpful for 'delegation clarity' . . . if something Internal is in the 'hands' of someone External, then that implies 'delegated', while still being mindful that the ultimate responsibility in the midst of an undesirable outcome remains on this end . . . unfortunately 'life' seems to have little compassion/pity for the disgruntled/whiners ?

P.S. If Organized appropriately, the 'prioritizing' will instinctively take care of itself ?

As you see GTD fit. . . .
 
If I am understanding you correctly, you are referring to differentiating between what I am doing and what I am delegating. I often create next actions for myself to confirm, review, etc., basically as a placeholder to ensure that something I delegated is taken care of.
 
@ivanjay205

Thank you for your GTD reply

All good

Yes, well expressed: "differentiating between what I am doing and what I am delegating" on this end, like other realities, would be in the clear realm of the "External", in the hopes of facilitating/supporting the seemingly all-important appropriate GTD engagement

Thank you very much

As you see GTD fit. . . .
 
If ChatGPT helps you, then I’m all for it. I have to say, though, that its principal talent seems to be producing some synthesis of what it’s learned, at about the level of a college sophomore, irrespective of area. It does seem to be good at bland, bureaucratic prose, but I’m sure it’s “read” a lot of it.
 
I do think it takes a lot of its lead from what I was saying. I use ChatGPT a lot, almost as a sounding board, to get thoughts out there and hear feedback. I would say I led it to about 80% of what I was describing as solutions or challenges. But I am fine with that, as it retains my choice in the design of my system. For the remaining 20% or so, it did challenge me, actually disagreed on a few points, and provided very valid reasons.

Kind of a point-counterpoint. But it really did understand what I was trying to accomplish, and it also helped me, thesaurus-like, to find words that resonated with me.
 
It's an interesting exercise, and what you seem to have created is basically a Job Description. I'm not sure whether such a thing is common in the US, but certainly in the UK every job has one. It details everything you're responsible for in the context of that job. It's essentially your areas of focus, pretty much, but fleshed out more than people usually do.
 
If ChatGPT helps you, then I’m all for it. I have to say, though, that its principal talent seems to be producing some synthesis of what it’s learned, at about the level of a college sophomore, irrespective of area. It does seem to be good at bland, bureaucratic prose, but I’m sure it’s “read” a lot of it.
If you're using ChatGPT or a similar generic online tool, then it will always sound bland, because it's the middle point of so many voices. It's similar to how people who learned English at an international school have a way of talking that is simultaneously neutral and unusual.

However, that's entirely about the data that trained it rather than the capabilities of an AI. If you took a private AI model and fed it every email, letter, and paper you've written in the last 5 years, it would produce output that was incredibly close to how you write, because you would be training it to learn every idiosyncrasy of your writing.

Of course, no one does this, because sharing your material with an online AI is a privacy nightmare, and local LLMs are still more like geek weekend projects than mainstream software. But I suspect in the next year or two we will see business-grade private AIs start to be marketed, where the data stays private to you and/or your company but a shared AI can learn from it. At that point, generating output that sounds natural to the user's ear will be more common.
 
More precisely, it has been trained on data which is easily available, which is likely biased towards bland corporate speech. In fact, I believe the primary consumer application will be the production of more bland corporate speech. Consider a chatbot trained to deny insurance coverage whenever possible; I’m sure it will be very polite.
 
And as time goes by, as people and institutions protect their IP, all the 'easily available' data left will be whatever was generated by the LLMs...
 
There are already licensing agreements for various repositories, and there will be more.
@mcogilvie The funny or frightening thing is that the human-created content will be less and less accessible for AI while the AI-created content will be freely accessible for AI. It may cause exponential progress in… hallucinations. :D
 
Maybe. I don’t know how the guardrails for hallucinations work. There’s a related aspect to AI feeding on AI-generated content: can AI generate anything truly new? It seems clear that large language models will mostly come up with new combinations of what they have been trained on, but occasionally come up with “hallucinations” which are outliers in the space of generated content and objectively false. But do the models ever produce something which is correct but not previously known to be correct?
 
@mcogilvie Most "new" things in the world are just "connected dots". Often a new connection of dots is made possible by other, earlier connections. For example, Uber is nothing new – it is just a cab service connected with software as a service. It was possible because of earlier connections of car+driver, the internet as a software-services platform, and phone+computer.

But it may not work with random dot connections. Or would it? Brainstorming seems to work… to some extent.
 
As a practical matter, maybe you’re right: I tell a LLM I want a new business using digital infrastructure to modernize an archaic sector of the economy, and out comes the next YouTube (begun in 2005), Pornhub (2007), or DoorDash (2013). However, there may be valid utterances which a given LLM will not produce, and perhaps there are some that no LLM will produce. I’m imagining something like Gödel’s theorem.
 
It's an interesting exercise, and what you seem to have created is basically a Job Description. I'm not sure whether such a thing is common in the US, but certainly in the UK every job has one. It details everything you're responsible for in the context of that job. It's essentially your areas of focus, pretty much, but fleshed out more than people usually do.
Definitely, it is common, but funnily enough, at the higher executive levels they are not always as common. So I think it was a healthy exercise to go through to really identify how I want to work.
 
I am definitely intrigued by this sort of use of AI. Thanks for creating the topic. It has been very interesting to follow.

I suppose the old-fashioned version of the same thing would be going for a coffee, a beer, or a walk with a friend or colleague. I haven't tried the AI version. It may be efficient and effective, and I get the impression it is very keen to please. The human version, on the other hand, can be awkward and contrary. It can challenge me in ways that I don't expect and can see things from a totally different perspective than I do. It can champion my efforts or call me on my BS.

There is obviously room for both but I wanted to chime in that a person may have different things to offer than an AI, even if the AI were able to come up with original insight.
 
I don't disagree with that point either. I laugh that my wife is the best of both worlds: very supportive at the right time, but if I am acting like an idiot she is not shy about telling me, lol.

That being said, I took a leadership personality test a long time ago, and one of the things that came out of it that is very true is that I do best when I have a sounding board in my decision making. I know what I want to do and what I believe is right, but I function best when I talk it out.

So in a way this was a method to do that. I guess, selfishly, I can steer it where I want to. But there was one point I felt was right, and it concretely disagreed with me and told me so, lol. That was very interesting to experience.
 