Schedule a Consultation

Have you read our blog and still have questions? We offer no-cost consultations to portfolio firms and organizations seeking further advice or assistance in strengthening and growing their Product and Tech teams. 


Sign up now to schedule your session with one of our expert principals. 

What's next?

Interna principals present at events worldwide. We send out a monthly newsletter with information on where to find us next and how to stream our talks to wherever you are. Our newsletter is filled with additional tips and tricks for executive leadership and the latest stories in global tech news.

 

Stay up-to-date by subscribing to our newsletter using the button below.  

I'm David Subar,
Managing Partner of Interna.

 

We enable technology companies to ship better products faster, to achieve product-market fit more quickly, and to deploy capital more efficiently.

 

You might recognize some of our clients. They range in size from small, six-member startups to the Walt Disney Company. We've helped companies such as Pluto on their way to a $340MM sale to Viacom, and Lynda.com on their path to a $1.5B sale to LinkedIn.


Interna Talks 2 - Balancing Human Expertise with AI Advancements




Generative AI, particularly large language models like GPT, has ushered in a new era of possibilities for product management and the process of building products. However, this rapidly evolving technology brings forth unique challenges that require careful consideration and adaptation. In a recent conversation among experts, the discussion revolved around the impact of generative AI on product management, the need for faster release cycles, the role of feedback and user adoption, and the potential benefits of AI in streamlining tasks. This blog post explores the key takeaways from their discussion and highlights the complexities surrounding the use of generative AI in product management.


One of the foremost challenges discussed was the rapidly changing nature of generative AI technology. Participants acknowledged that AI systems like GPT can sometimes "make things up" or produce confident but erroneous outputs. However, it was noted that providing the appropriate data and context can reduce the likelihood of such errors. This highlights the importance of understanding the limitations and nuances of AI models to ensure accurate and reliable outputs.
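To make the grounding point concrete, here is a minimal sketch in Python. The `call_llm` stub is an assumption standing in for whichever LLM client you use, not any specific product's API; the idea is simply that supplying the relevant data alongside the question, and instructing the model to answer only from it, lowers the odds of a confident-but-wrong answer.

```python
# A minimal sketch of "grounding": include source data in the prompt and
# instruct the model to answer only from it. `call_llm` is a hypothetical
# stand-in for whatever LLM client you use (OpenAI, Anthropic, a local model).

def call_llm(prompt: str) -> str:
    """Hypothetical LLM call; replace with your provider's client."""
    raise NotImplementedError("wire up your LLM provider here")

def grounded_answer(question: str, context_docs: list[str]) -> str:
    # Supplying the relevant corpus with the question reduces the odds of a
    # confident-but-erroneous output, per the discussion above.
    context = "\n\n".join(context_docs)
    prompt = (
        "Answer the question using ONLY the context below. "
        "If the context does not contain the answer, say 'I don't know.'\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
    return call_llm(prompt)
```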


Feedback and user adoption play a crucial role in product management, and generative AI has the potential to enhance the analysis of user data. By leveraging AI, product managers can gain deeper insights into user preferences, behavior, and patterns. However, the conversation underscored the need for fact-checking and editing when utilizing generative AI as a tool for product managers. While AI can provide valuable insights, human expertise is still indispensable in ensuring the accuracy and relevance of the generated information.
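As one illustration of keeping a human in the loop, the sketch below (hypothetical names throughout; `call_llm` is again a stand-in, not a real library) drafts a theme summary from raw feedback but refuses to treat it as final until a product manager has reviewed and corrected it.

```python
# A rough sketch of AI-assisted feedback analysis with a human checkpoint:
# the model drafts a theme summary; a product manager must fact-check and
# approve it before it counts as reviewed.

from dataclasses import dataclass, field

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for your LLM client; replace with a real call."""
    raise NotImplementedError

@dataclass
class FeedbackDigest:
    themes: str                      # model-drafted summary of user feedback
    reviewed: bool = False           # has a human fact-checked it?
    corrections: list[str] = field(default_factory=list)

def draft_digest(feedback: list[str]) -> FeedbackDigest:
    prompt = (
        "Group this user feedback into themes with counts and representative "
        "quotes. Do not infer anything that is not stated.\n\n"
        + "\n".join(f"- {item}" for item in feedback)
    )
    return FeedbackDigest(themes=call_llm(prompt))  # unreviewed draft

def approve(digest: FeedbackDigest, corrections: list[str]) -> FeedbackDigest:
    # Human expertise stays in the loop: the PM edits before the digest
    # is allowed to inform the roadmap.
    digest.corrections = corrections
    digest.reviewed = True
    return digest
```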


The conversation highlighted that relying solely on AI without comprehending or verifying its output can lead to unintended consequences. Just as using copied code snippets without understanding them can result in errors, AI should be seen as a tool rather than a substitute for expertise. Jeff emphasized that AI should serve as a starting point, enabling product managers to leverage its capabilities while combining it with their domain knowledge and experience. This fusion of AI and human expertise can lead to more robust and reliable product management processes.


Eddie brought attention to the scale and speed at which AI operates, which amplifies the challenges for product managers. Validating and controlling generated systems becomes more complex as the scale of AI increases. Particularly in Agile development, where requirements may be poorly defined, ensuring that AI-generated code aligns with the desired criteria becomes a daunting task. This calls for a careful balance between embracing AI's efficiency and maintaining rigorous quality control measures.
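One way to make that quality control concrete is to encode a story's acceptance criteria as executable tests before accepting whatever the model generated. The example below is a sketch under assumptions: both the story ("discounts apply only to orders over $100") and the generated function `apply_discount` are hypothetical, used only to show the pattern.

```python
# Sketch: guard AI-generated code with explicit acceptance criteria.
# Pretend this function body came back from a code-generation model.

def apply_discount(total: float) -> float:
    return total * 0.9 if total > 100 else total

# Acceptance criteria from the (hypothetical) story, written as tests
# BEFORE the generated code is accepted into the codebase.

def test_no_discount_at_or_below_threshold():
    assert apply_discount(100.0) == 100.0

def test_discount_above_threshold():
    assert apply_discount(200.0) == 180.0

if __name__ == "__main__":
    test_no_discount_at_or_below_threshold()
    test_discount_above_threshold()
    print("generated code meets the stated acceptance criteria")
```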


Generative AI, exemplified by large language models like GPT, has undeniably impacted product management and the process of building products. The conversation among experts highlighted both the challenges and opportunities associated with this technology. While AI has the potential to streamline certain tasks and provide valuable insights, it is crucial to recognize its limitations, validate its outputs, and combine its capabilities with human expertise. By leveraging generative AI effectively and keeping up with its advancements, product managers can navigate the complexities of modern-day product development, ultimately delivering better products and experiences to their users.


Transcript:


[00:00:05.410] - David

Hello, everybody, and welcome to another one of the Interna fireside chats. We'll be talking about a number of things, AI and other stuff, today. So why don't we just jump in? Let's start with AI and product management. I presented at a conference in New York about a week and a half ago, and one of the questions I was asked was: has generative AI, large language models, ChatGPT, changed product management and how we build stuff? I've got a bunch of thoughts about this, but I don't know if one of you guys wants to jump in before I tell you some of my thoughts. I'm going to go with no. Then I'm going to tell you what I think. With generative AI, we have two additional problems. We know with building products, we want to build small and release constantly to get market feedback. We believe we know what the market wants, but until something gets out there, we don't really know. And by releasing something, we change the market anyway.


[00:01:34.460] - David

It's the Buddhist idea: you can't step twice in the same stream. But with generative AI, we have new problems. The two new problems are, first, that the technology is changing quickly, so what we can do today is different than it was yesterday, maybe quite literally between today and yesterday; and second, that by working with it, we discover new things we can do. So my argument is, because the technology is changing so quickly and we're learning so quickly, we have to release even faster; we have to release all of the time. And so we need to think about the engineering process differently, the product management process differently, and the feedback process completely differently, faster than we ever did before. Even if you're releasing multiple times a day, you need to instrument the systems so you can understand the impact, be able to analyze that impact, and iterate: rethink what you're doing faster, not just releasing multiple times a day, but thinking about what's on the roadmap much quicker. Do you guys agree? Do you guys disagree? And to the extent that you agree, what changes does that mean?
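[Editor's note: a minimal sketch of the instrumentation David describes, not anything the participants built. Tag every usage event with the release that produced it, so multiple-times-a-day releases can each be evaluated against real behavior. All names here are illustrative, not a specific analytics product.]

```python
# Sketch: stamp events with the current release so impact can be compared
# across very frequent releases.

import json
import time

RELEASE = "2023-06-01.3"  # illustrative; stamped at deploy time, e.g. by CI

def track(event: str, **props) -> None:
    record = {"ts": time.time(), "release": RELEASE, "event": event, **props}
    print(json.dumps(record))  # in practice: send to your analytics pipeline

# Usage: emit events at decision points, then compare conversion, error
# rate, etc., grouped by `release`.
track("checkout_started", user_id=42)
track("checkout_completed", user_id=42, total=31.50)
```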


[00:03:06.240] - Jeff

It's not a new idea to me. We've spent a lot of time, I mean years, learning to do that release cycle as quickly as we can. So if this somehow requires us to be faster, does it also somehow enable us to be faster? Because I'm all for making that cycle as quick as possible, but I think we all probably do that already. So what's really changing here?


[00:03:35.080] - David

My argument is that what's changing is the feedback cycle.


[00:03:39.400] - Eddie

Go ahead. I'm not sure the feedback cycle gets faster because of generative AI. You still require user adoption. You still need to collect information about usage so you can analyze product-market fit, if you're at the early stage, or the impact of your changes over time. With AI technology you may be able to analyze the data you've collected from users a little bit faster, maybe get a little more insight into it. But I don't think that necessarily shortcuts most of the time it takes to actually get user adoption. I'm thinking about the impact on product management, not so much about it as a tool for the product managers, right? And it's not a whole lot, from my perspective. Part of it isn't unique to product managers: it's just a tool that is very good at summarizing, generalizing, making sense of things, but with the caveat that it struggles with truth sometimes, and facts. It is very persuasive; it really digs in on a hypothesis. And so what that means is it demands that product managers be a lot more, in a way they already are, editors and fact-checkers of information. Before, it was just a lot of information and semi-formulated claims and requests from the stakeholders.


[00:05:32.460] - Eddie

Now you're getting it from a much broader, much bigger thing called generative AI, but it really puts a lot more demand on product managers to fact-check and edit.


[00:05:44.500] - David

Go ahead, Mark. I'm sorry to interrupt you, Eddie, but go ahead.


[00:05:47.220] - Eddie

Mark, hello.


[00:05:49.010] - Mark

A couple of comments. One is, yes, I think AI will have an impact on product management, and yes, I think it can shorten the cycle. I think, though, we have to think through where we're at in the AI adoption cycle. And you have to also believe that AI will continue to get more powerful, and perhaps more dangerous, over time. But where it's at now, there are many aspects of the whole product management cycle where AI could be useful, including, and I think Eddie also alluded to this, gathering, collating, and making sense of information in terms of feedback from users; cutting down on the time that it takes to do research; taking on some of the grunt work that product managers have to do. Not replacing the product manager, but potentially making their jobs easier and more efficient, and maybe replacing some of the lower-level analyst-type functions that we have to invest in today.


[00:06:38.420] - David

So is your argument, then, that we don't go faster because we need to, because the tools and our knowledge of the tools are changing; we go faster because we can, because our product management function has less friction. Is that your argument, Mark?


[00:06:57.090] - Mark

Yeah, I think that sums it up.


[00:07:04.390] - David

Let's say you get the feedback that Mark's talking about. This is for anybody. You get the feedback that Mark's talking about, you get a bunch of data, and you're in the product group, and you're having your instantiation of a large language model, generative AI, read that data. You're in product management. There are hallucinations that your generative AI can produce. How does that change your process? How do you think about it?


[00:07:39.170] - Mark

It doesn't remove the need for quality control, for one thing.


[00:07:44.450] - David

Does it increase it?


[00:07:47.750] - Mark

I don't know.


[00:07:52.230] - David

I'm very concerned about hallucinations.


[00:07:56.710] - Jeff

Very concerned about what?


[00:07:58.380] - David

Hallucinations: the generative AI just making stuff up. As Eddie was pointing out, it seems to do that with some frequency. Always confident, never in doubt. Sometimes makes mistakes, but never in doubt. What do you guys think?


[00:08:21.420] - Mark

I think it does. But does it do it so much when you are feeding it the information you want it to analyze and summarize and collate? I think it does it less in those scenarios than when you're simply just asking for information. In other words, if you feed it the appropriate data, you're likely to get back the appropriate response with less likelihood of hallucination. That's my understanding, anyway.


[00:08:43.670] - David

So your argument is: choose the right corpus and you're more likely to get the right results.


[00:08:48.300] - Mark

That's right.


[00:08:51.210] - Jeff

In everything from generating prose to generating code, we've always had people who've been inclined to Google stuff, copy and paste, keep code snippet files, et cetera. And that's always led to unintended consequences when they did it with something they didn't understand or didn't verify. I look at this as better technology, but with the same requirement: neither in generating prose nor in generating code nor anything else is this a substitute for knowing what you're doing. You may get an idea, a structure, something to work with, something you can edit, but I think mostly it demands of us that we not regard that as the end of the process, but the beginning of it.


[00:09:40.630] - Eddie

It's true, but at the same time it's happening at a much larger scale, at a much faster speed. It's like having a gunfight versus having nuclear weapons. Yes, you have a weapon, but it's at a way bigger scale that is a lot harder to control when unleashed. And now you have people using generative AI to generate complete systems, even, and it becomes a lot harder to validate that things are actually working correctly. I'm sure we've all tried asking ChatGPT to generate our profile; it's plausible that I could have done the things it says, because of what I do, but no, I didn't actually do them. It becomes a lot harder from a product manager perspective when, a lot of times, especially in the Agile world, stories unfortunately are not well defined from an acceptance-criteria perspective, and it becomes a lot harder to determine, hey, does this really do what I want it to do? I'm talking about using generative AI to actually generate code.


[00:10:56.750] - David

Mark?


[00:10:58.110] - Mark

Yeah, one more thought. We tend to jump on things very early, and that's good; we should. But we have to recognize it's very, very early days. We're only just seeing generative AI in the mainstream, and so the story has not yet fully unfolded. One very important takeaway for me is that it does behoove anybody in software development, and frankly maybe in many, many fields besides, to really understand where AI can be useful, and not get left behind, because your competitors will be doing it. I've seen people jump on AI to make product pitches, to come up with the latest and greatest new idea, or to do some of the PM work we're talking about. That's not necessarily to say that it is ultimately, over the long term, going to be that much more successful than humans were. I don't think we know yet. So watch this space.


[00:11:45.950] - Eddie

Right. I think that's actually a really good point. I think that begs the question, just like anything we do, right? We have better tools; where should we focus our time, where should product managers focus their time? I've found myself actively dissuading people from going to college to learn software development, a CS degree. So I found it kind of interesting when, as I think about it, learning software development is not the most important thing if you want to work in software development, and that's especially true for product managers. Now the question with all these tools is where they should really be adding value. It will probably never replace the visionary, the North Star: what should the company be? What's the ultimate goal? What industry, what space, what problem is it trying to solve? You're not going to ask the AI, what problems should I solve?


[00:12:56.790] - David

Sorry, that brings me to the next topic. You bring up software engineering and dissuading people from going into software engineering undergrad programs. I want to talk about prompt engineering for a bit. I have this argument about prompt engineering: it's called engineering maybe to be cute, but maybe not accidentally. My experience, particularly with ChatGPT but also some of the other generative AIs, is that if you don't write your prompt well, you won't get stuff that's particularly meaningful. My model here, and I think I talked about this last time, is that in the history of computer science we've had a series of higher-level languages, and each higher-level language becomes much more expressive, and much more powerful, in a much more limited field. Take HTML to express interfaces, abstracting away interfaces: a powerful language in a more limited area. Prompt engineering, I would argue, is an even more powerful language yet, but it has two attributes that are different from other languages. One is that it's not well constrained; you can say anything. And second, the nice thing about higher-level languages is that a language is going to be very powerful in its domain and it's going to be right in its domain; with prompt engineering, as we talked about before, it might be wrong in its domain, right?


[00:14:41.630] - David

So it's wider, it's less constrained, and it's less likely to give the right result. Is prompt engineering engineering?
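[Editor's note: a minimal sketch of David's "higher-level language" framing, under assumed names: a prompt template treated as a parameterized program. Unlike HTML, the "language" is unconstrained and the output is nondeterministic, so the result still needs validation.]

```python
# Sketch: a prompt template as a parameterized program. The template and
# function names are illustrative, not a specific framework.

SUMMARY_PROMPT = (
    "You are a product analyst. Summarize the following {n_items} support "
    "tickets into at most {max_themes} themes. Output one theme per line, "
    "formatted as: <theme>: <ticket count>.\n\nTickets:\n{tickets}"
)

def build_prompt(tickets: list[str], max_themes: int = 5) -> str:
    # "Compiles" parameters into the higher-level-language program (the
    # prompt); the model's output, unlike a compiler's, is nondeterministic.
    return SUMMARY_PROMPT.format(
        n_items=len(tickets),
        max_themes=max_themes,
        tickets="\n".join(f"- {t}" for t in tickets),
    )

print(build_prompt(["App crashes on login", "Login button unresponsive"]))
```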


[00:14:54.190] - Eddie

I'll take that first. To me, it's not; it's journalism. The biggest differentiation from other high-level languages is that it's nondeterministic. Good prompt engineering is good journalism, where you are asking the right questions to get true insight from the person you're talking to.


[00:15:18.150] - David

Mark, I think you're going to disagree, and I think Jeff's going to disagree too. Actually, let Jeff go first.


[00:15:26.230] - Jeff

I'll surprise you, but there's more: I agree with Eddie about how it's being used now. But it just so happens I had about a three-hour conversation with a friend and colleague who's trying to do a prompt engineering startup, and a lot of his interest is around things like solving the constraint problems that Eddie raised. Or maybe it was Mark, I'm sorry. For instance, if you want this thing to be the embodiment of NPCs in games, instead of just having a little exclamation point above their head and a set speech, where if you stay long enough, they repeat their speech.


[00:16:06.650] - David

It can really... for the audience who


[00:16:09.350] - Jeff

Doesn't know, NPC: non-player characters. In a lot of immersive worlds, there are certain characters who are actually played by the computer, that you have to talk to, and they say, oh, there have been bats invading the woods over here, we need somebody to go kill 20 bats, and that gives you an extra quest or whatever. So this is a case where the conversation being more native and fluid and more human-like makes the game better. But there are also fixed points you have to get it to, where it has to communicate those things, and that's a problem they're interested in trying to tackle. And that's useful in not just games, but a wide range of possibilities, such as customer service and other spaces. So I think there is potentially an engineering practice in learning how to interact with these systems to get them to behave in certain ways. But so far I don't think that's really developed. I think that's in the future rather than something we see today. Right now, I think you spit some text in, it spits something back; you like it or you don't; you try to say a different thing, ask a different way.


[00:17:22.810] - Jeff

But when you can start to control some of the behavior of the underlying systems, I think that'll look more like engineering.
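[Editor's note: a minimal sketch of the NPC idea Jeff describes, free-form conversation with fixed plot points the dialogue must still deliver. The character, constraint list, and `call_llm` stub are all assumptions for illustration, not an existing game API or the startup's actual design.]

```python
# Sketch: constrain an LLM-driven NPC so natural conversation still conveys
# required plot points.

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in; replace with your LLM provider."""
    raise NotImplementedError

MUST_CONVEY = [
    "Bats have invaded the western woods.",
    "The player is offered a quest to kill 20 bats.",
]

def npc_reply(player_line: str, history: list[str]) -> str:
    # The fixed points ride along as hard constraints in every turn.
    prompt = (
        "You are Old Marta, an innkeeper NPC. Speak naturally and stay in "
        "character. Within the next few replies you MUST convey these facts:\n"
        + "\n".join(f"- {fact}" for fact in MUST_CONVEY)
        + "\n\nConversation so far:\n" + "\n".join(history)
        + f"\nPlayer: {player_line}\nOld Marta:"
    )
    return call_llm(prompt)
```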


[00:17:29.870] - David

So your argument is, and I'm putting words in your mouth, you didn't actually say this, but: it should be engineering, but we're too ignorant to make it engineering yet. Is that your argument?


[00:17:41.250] - Jeff

Essentially. I would have worded it a bit differently. I'd say the tools are a little immature to approach them in an engineering fashion, and in this kind of industry, that's months, maybe, not years.


[00:17:56.230] - David

So Jeff and Eddie both are saying we don't need to hire someone with an engineering mindset to interact with generative AI to get good results.


[00:18:06.730] - Eddie

That's actually not quite what I'm saying.


[00:18:09.610] - David

Great.


[00:18:10.650] - Eddie

Skilled journalists have their common practices, right, to get good answers. So there is a science to it. It's not just some magic person who just knows how to talk to people. So it's not quite engineering, but I'm saying it's definitely a skill, definitely a learned skill, and...


[00:18:40.150] - Jeff

It's an analytical one, and one that's organized in ways that engineers tend to think. And I'd say I don't think journalists do: realizing that if you include this word, or you ask for this kind of structure or something, it has this kind of effect. Engineers are good at that kind of thinking. I don't think that writers think that way as much. I think they're more likely to spit something out, get something back, and then try to edit a bit.


[00:19:05.550] - David

Mark, what say you?


[00:19:07.470] - Mark

I think I'm somewhere in between Eddie and Jeff. There's a surprising amount of literature on prompt engineering, actually, and I've seen articles encouraging the use of prompt engineering and sort of attempting to train people in how to get the most out of LLMs and AI in particular. But I don't think, and this is probably where I agree with both, I don't think you need a hardcore engineering skill set in order to take advantage of that. You've got to be logical. I don't know if they still teach logic formally at university, but if you could be a logic major, or an English major, perhaps you might be just as well off as if you were a computer science major. There are two exceptions to that. One is if the AI interface is designed for prompt engineering, in other words, that's the way it wants to interface with the outside world, unlike ChatGPT, which I don't think is; if there's an interface that's designed for hardcore prompt engineering, well, there's a way to take advantage of it. And the second one would be, of course, if you're using AI to generate code; then you'd better be an engineer, I think, in order to do that.


[00:20:10.390] - Mark

And maybe that's another place where prompt engineering comes in. So those are the two caveats.


[00:20:16.130] - David

Yeah, boy. What you're saying, Mark, reminds me that Shakespeare might be wrong. He said in one of his plays, I think it was Macbeth: first, kill all the lawyers. It turns out there may be a use for them: someone logical who knows how to interact with things.


[00:20:31.600] - Mark

First, kill all the lawyers, then kill all the engineers.


[00:20:34.130] - David

Yeah. Let's stop at step one.


[00:20:39.690] - Jeff

Note to self: watch the lawyers, because they may be coming for me next.


[00:20:44.290] - David

Exactly. Okay, so we have all this cool AI stuff, or maybe, for some folks, sometimes scary stuff. Should it be regulated? Now, I know this is a potentially interesting conversation. Regulation could kill you. That's why I'm asking. Go ahead.


[00:21:08.950] - Jeff

All right. Just in general, as a philosophy of law, I think it's far better to assign liability for harm that's done than to impose prior restraint through regulation, and I think that's particularly important in our industry. I think the reason why we've essentially revolutionized the world economy in the last 30 years is because we were left almost completely alone. And when, in the later stages of some kind of technology, you start to see things like GDPR or SOX, some of the stuff that happens after decades of doing it, the same thing always happens: there becomes a much bigger focus on bigger organizations that can hire and support compliance departments and things of that sort, and it strongly selects against startups and small individual projects, because they can't afford that overhead. AI is just at the very cusp of starting to be important, and doing that anytime soon, I think, will absolutely crush it by eliminating startups of the sort that I just described my friend trying to do. The only countervailing argument would be a giant, imminent, major threat: that AI is different from other things because they're going to try to take over and become our robot overlords.


[00:22:27.370] - Jeff

And I just see no credible argument for that.


[00:22:29.900] - David

I have a different argument: misinformation generation. We have an election coming up. What's that? More misinformation generation: making politicians say things that they didn't actually say, in a credible way, with data supporting it, that you then generate and publish on whatever news site. There is a significant potential threat with generative AI to do things en masse that we were never able to do before.


[00:23:15.300] - Eddie

And I think that's where.


[00:23:18.820] - David

Mark? No, I was going to say Mark or Eddie, whoever wants to go.


[00:23:23.880] - Eddie

Yeah, I think that's where the scale of how quickly, and how sophisticated, the misinformation can look when you generate it through AI technology versus individuals... I wouldn't say there's no difference, but how do we regulate individuals spreading misinformation, telling lies? How do we regulate that? How is that different from regulating a piece of software generating the same information? The big difference is in the scale, right? It's not the intention of it. So I don't have an answer. I'm not a lawyer. My point is, kill them.


[00:24:06.040] - David

Shakespeare just kind of killed all the lawyers.


[00:24:09.240] - Eddie

The same way we regulate people doing bad things, right? We should have similar regulations around people deploying systems that do that kind of thing, at least intentionally. But the risk is much higher for a system to go awry even unintentionally, just because of the scale and the speed and the sophistication, in terms of the visuals and the look of the video; we've all seen deepfake videos. I don't have good answers, but I definitely do think regulation has a place.


[00:24:48.690] - Mark

It needs to be regulated, bottom line. However, I think we have to recognize the challenges of doing that. It's kind of an arms race. We can't assume that China and Russia will want to regulate AI or will follow our regulation standards; in fact, we should assume they probably won't, and that in itself presents some kind of existential threat. But today, what is it, the AI Safety Institute? That may not be the exact right term. One of those AI safety institutes, actually the Center for AI Safety, said today that AI represents the threat of an extinction event for humanity. Wow. And they've all signed onto that, all the leaders. And when you listen to Sam Altman in Congress, these guys seem to be begging for oversight. And as usual, our government is several steps behind in regulating technology. But I think we need it. We can't assume that these companies are going to regulate themselves. How you deal with the threat from foreign powers, I don't know. I'll have to leave that to somebody else; it's above my pay grade.


[00:25:49.490] - David

I'll let Jeff have a word here and then we'll move on. Go ahead.


[00:25:53.670] - Eddie

I think Mark brought up a really good point. Our government is notoriously bad at dealing with all things technology. Half of our senators don't know how Facebook makes money. It's free; why? how? So the question is, and we have parties, as Mark pointed out, that are not going to respect any kind of regulation even if we have good regulation. It's more a question of what the industry does to survive. I don't have an answer to that.


[00:26:31.890] - David

I'll let Jeff have the last word here.


[00:26:36.850] - Jeff

I might feel differently about this years from now, but I think that in the short term, the way to address the kinds of concerns you raise, particularly those that you raised, Eddie, is to define outcomes that are unacceptable. For instance, we have laws about slander; we have laws about using a person's likeness without their permission and compensation, et cetera. And those may need to be adapted because of things that simply aren't addressed in law because they weren't possible before. But that's different from trying to regulate the structure of how AI systems are evolved or used. The fact that you might be able to solicit a murder online on the dark web doesn't call for regulation of the Internet so much as it calls for murder being illegal and making sure enforcement and investigation can support that. Being able to create politicians saying things they didn't actually say is, probably because of public-figure restrictions and some other things, not strictly illegal now. I would welcome changes along those lines, because we're now able to do that, and doing it is destructive. But I think regulation is just too much of a brake on the entire industry. And the very external parties, like other countries, that you're concerned about not only might do their bad things, but will get far away from us if we have to suffer the brake of AI regulation and try to function.


[00:28:04.430] - David

Okay, lightning round. One-word answer; maybe I'll go for more. The evil AI overlords killing us all: percentage chance. Jeff, you're first.


[00:28:20.150] - Jeff

Zero.


[00:28:21.350] - David

Zero. Eddie?


[00:28:24.310] - Eddie

100%. Depends on time frame.


[00:28:28.630] - Jeff

Well, okay.


[00:28:31.910] - David

In the next ten years. In the next ten years, the evil overlords killing us. Jeff, I'm going to start again, because Eddie made me better define my question; it turns out that prompt engineering, for David, is a problem. Jeff, in the next ten years now?


[00:28:46.050] - Jeff

I feel more comfortable with my zero. Okay, Eddie, speak to the next thousand.


[00:28:51.790] - Eddie

50%, but indirectly.


[00:28:55.150] - David

50% indirectly. Okay.


[00:28:57.030] - Mark

0%.


[00:28:59.970] - David

I'm going with the correct answer. Zero.


[00:29:04.290] - Mark

Yeah, we won.


[00:29:06.770] - David

It's like the McLaughlin Group: he always came back with the correct answer. Okay, I'm changing topic.


[00:29:13.510] - Jeff

You've been voted off the island. Sorry.


[00:29:20.630] - David

I'm changing topic completely: agile, offshore, asynchronous. You've got some teams where you're at, some teams offshore, and they're working different shifts. How do you do agile in that world?


[00:29:46.510] - Jeff

I can add a little bit at the beginning, because I've done this a lot over the last 15 years, and I've always just arranged the stand-ups and ceremonies at times that were plausible overlaps. If it's Europe, you're usually doing it early in the day in the US, and vice versa if it's somewhere over in Asia. And it requires some extra work, because there are things beyond just the daily stand-up and the other ceremonies that you have to interact on to be doing agile correctly. But it just requires some stretch on both sides. That's the only solution I've found. Maybe somebody else has a more in-depth answer.


[00:30:27.690] - David

We'll let someone else go first. Eddie, you got a thought here?


[00:30:33.070] - Eddie

To me, the stand-up is actually probably the least important ceremony that needs to happen synchronously. The retro and the refinement, to me, are more important. For the stand-up there are a lot of stand-ins. There are all kinds of chatbot tools, or even TikTok-like tools; I've used tools where people record TikTok-like video updates and send them, and you're reminded to spend ten minutes in the morning watching everybody's videos. That actually was quite effective, because a lot of times, when it's not done well, stand-ups devolve into basically status updates instead of focusing on how you unblock each other. For that, I think video is very effective in helping create a sense of togetherness despite the fact that you are not together. My daughter has best friends that she has never met in person, has never been on the same continent with, but they're best friends because they see each other's videos; they watch each other's TikTok and Instagram. I think there are all kinds of asynchronous communication tools, especially short video, that can be pretty effective. But to your point, there's nothing that can totally replace direct interaction.


[00:32:09.410] - Eddie

But those can be limited to just the key ceremonies: the retros and the refinement. Even the refinement can be done asynchronously to some extent.


[00:32:21.030] - David

Mark, do you have something before I chime in?


[00:32:23.590] - Mark

I think team structure is important. I know not everybody has the flexibility, but if you want to do asynchronous agile, I would argue that you organize your teams along geographies, so that each team can work independently and be colocated, and you only need to get together when you need to sync up, however often that needs to be. But if you have one person here and one person there, and they're all in different time zones, you're making it very hard on yourself. And why take all that pain unless you absolutely have to?


[00:32:52.990] - David

I am in the same camp as Mark, that is, try to colocate your teams, with a nuanced difference: it depends on how much change you have in your product. If you have a highly innovative product where you're learning from the market and you have to iterate quickly, you'd better have your teams able to talk frequently at low friction. If you have a very well-specified product that doesn't change, this API talks to this API, then having dispersion among people, particularly product managers and engineers, becomes a lot easier. The thing that concerns me is that one of the fundamental tenets of doing agile is favoring conversation over documentation. And so to the extent that we scatter people on a team to a variety of places and make them document things instead of having conversations, it concerns me that we're getting less agile. I think Mark's got an additional note here.


[00:34:07.190] - Mark

If I've learned one thing about Agile over the many, many years, and it's maybe only one thing, to be honest (and if I'm even more honest, it's something I kind of decided deliberately to ignore in the beginning), it's the value of people over process, which, if I'm not mistaken, correct me if I'm wrong, is actually in the manifesto. And there's no substitute for that. If you have great people, you can make almost anything work.


[00:34:34.110] - David

Well, great. I think that's all for today. Thanks, everybody, and thanks to everybody who's listening. If people have any questions, feel free to reach out on the website; there's a Contact Us, and we'd be glad to respond to more questions. Thanks, everybody.


[00:34:52.630] - Jeff

Thanks.


[00:34:53.610] - Mark

Thank you.






