What Does Good Look Like? Teaching AI to Think Like Your Best Contributors

As AI works its way into every aspect of corporate jobs, companies are understandably moving quickly to evaluate tools, explore capabilities, pilot, and implement use cases. These applications span business groups, helping to enable efficiency, expand creative output, speed up software development, and more. Some companies are already seeing results, at the individual level, where a programmer has sped up their output through AI-assisted development, or at the company level, as new AI-powered products or capabilities are developed and rolled out. The companies that succeed will be those that are strategic in their implementations, thoughtful in their application of AI, and truly invested in improving workflows and processes with AI assistance.
Change can be difficult, especially for companies that are not already set up to be innovative, empowered to try new things, or ready to make the necessary changes. Add to that the technical questions, the quality concerns, and the existential fears that individuals and companies have about AI, and there are bound to be bumps on the road to adoption and, ultimately, value. But whether companies are rushing to eke maximum productivity out of AI or patiently exploring, several positives are coming out of this transformative technology.
We are quickly moving past the binary litmus test of “Are you using AI?” to a more substantive question: “Are you using AI well?”
We’ve been using AI for years, from predictive analytics to machine learning, and now generative and agentic AI. We’ve learned by adopting AI internally, co-innovating with our clients to incorporate AI into their workflows alongside our teams, enabling teams on the AI capabilities built into the partner products we support, and dreaming big and building amazing new experiences with AI at the center.
Exploring the Hidden (Non-Technical) Benefits of AI Adoption
Let’s talk about some of the hidden benefits that are coming with these seismic shifts in AI adoption. These concepts aren’t new, but they are often overlooked or skipped entirely. Share these tips with your teams or executives, but brace yourself for the chorus of product managers and data engineers saying “We told you so!”
Yes, AI can do that. Likely. Whatever it is. Most things. Some things. Some better than others. Maybe not now, but soon.
How do we make AI do it better? Whatever it is, these guidelines apply. We might turn over certain tasks to our AI assistants and tools, but how do we make sure we are not giving up control of quality?
We recently worked on a number of AI projects where we sought to augment content creation with AI assistance, drafting content for articles, social media, marketing material, and more. Once again, we know that AI is capable of this. Ask your favorite AI tool to write you an 800-word blog post on any subject, and you’ll get exactly that. Is it any good? Maybe. Is it good enough for our website? Definitely not.
While the stated goals often started with ‘speed up output’ or ‘increase velocity,’ our approach also included ‘improve quality’ and ‘increase consistency.’
In the examples below, we tackled content creation with AI the same way we would tackle onboarding a new team member. Just like our human team members, AI needs clear guidance, standards, and structure. When we put in the right ingredients (detailed instructions, defined outcomes, and quality expectations), we get better, more consistent results. And we create scalable practices that serve the whole team, whether or not AI is in the loop.
What Are We Putting into AI?
This section is almost a given, and I almost left it out completely. It’s hard to beat the age-old adage of “Garbage In, Garbage Out” when talking about collaborating with AI. We’ve used this phrase across our website when talking about Analytics tagging, SEO, Product Information Management, Content Creation, you name it. Yet it holds true, now more than ever.
AI models have access to large training libraries and, at first glance, can sound coherent or smart. But if everyone has access to the same libraries and uses the same AI tools, we’ll quickly devolve into a sea of sameness and blandness. Your company is unique, your viewpoints should be differentiating, and the more of YOU that you can feed into AI, the better. AI tools, including custom GPTs, often allow you to upload centralized, company-specific resources: information that is unique to you and will never come from an off-the-shelf engine. Centralized knowledge and resources set the stage; individual users should then add the intelligent and unique ingredients that make an excellent output.
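To make that concrete, here’s a minimal sketch (in Python) of what grounding a prompt in your own materials might look like. The file names, directory, and helper are hypothetical stand-ins for whatever centralized resources your team maintains; the point is that this context travels with every prompt.

```python
from pathlib import Path

# Hypothetical company-specific source files; the value comes from YOUR
# organization, not from the model's generic training data.
CONTEXT_FILES = [
    "brand_voice.md",     # tone, vocabulary, phrases to avoid
    "style_guide.md",     # formatting, structure, citation rules
    "points_of_view.md",  # the opinions that differentiate you
]

def build_company_context(resource_dir: str) -> str:
    """Concatenate centralized company resources into one context block
    that gets prepended to every content prompt."""
    sections = []
    for name in CONTEXT_FILES:
        path = Path(resource_dir) / name
        if path.exists():
            sections.append(f"## {name}\n{path.read_text()}")
    return "\n\n".join(sections)

# Individual users then add the unique ingredients for one deliverable:
prompt = (
    build_company_context("company_resources")
    + "\n\nDraft an 800-word blog post on first-party data strategy, "
    "using the interview notes below as the primary source.\n"
    "<paste interview notes here>"
)
```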
Using our marketing team at Bounteous as an example, we see AI as an accelerator and a way to empower teams, not a replacement for the incredible knowledge and creativity we have across the organization. Our people shape and refine those thoughts, give feedback, debate and argue, and form our viewpoints. These ideas come from our vast industry and technology experience, our training, and our shared experiences. We can use AI to extract those ideas, and to translate and summarize slides or meeting transcripts into different formats. Put it all together, and we’re bringing great ideas and thought leadership to life more quickly and more clearly.
Keep this in mind as you read the rest of this post. Let’s adopt processes or instructions that help our whole teams and lean into the mechanical advantages of AI.
Why AI Needs Better Instructions
One of the biggest opportunities in using AI today isn’t hidden in new capabilities; it’s in treating AI like a collaborator. If we expect it to create meaningful work, we need to give it the kind of information we’d share with another person. That means getting specific. What are we trying to achieve? What do we want it to produce? What does “good” look like?
Rather than asking AI for help in vague terms, teams should write down the same kinds of directions they’d give a new colleague. Too often, company standards are passed down informally from team member to team member, shared in vague terms, or never quite clearly communicated.
If you already have this documentation created and shared internally, you are off to a great start. For most companies though, it may feel like walking backwards. Before we even get to play with AI tools, we need to stop and define things we might take for granted.
These instructions should be documented and distributed. Going through this exercise is not only necessary as we set up our processes; it also benefits your existing teams. How well can each team member recite the general guidelines for format and structure, tone and branding? Setting standards, communicating clearly, and gaining consensus: these are huge wins!
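As an illustration (the fields below are examples, not a standard), those directions can be captured once as a structured brief that both human team members and AI tools consume:

```python
from dataclasses import dataclass, field

@dataclass
class ContentBrief:
    """The directions we'd give a new colleague, written down once and
    reused for every AI-assisted draft."""
    objective: str          # what are we trying to achieve?
    audience: str           # who are we targeting?
    deliverable: str        # what do we want it to produce?
    length_words: int
    tone: str
    must_include: list[str] = field(default_factory=list)
    must_avoid: list[str] = field(default_factory=list)

    def to_prompt(self) -> str:
        """Render the brief as explicit instructions for an AI tool."""
        return (
            f"Objective: {self.objective}\n"
            f"Audience: {self.audience}\n"
            f"Deliverable: {self.deliverable}, about {self.length_words} words\n"
            f"Tone: {self.tone}\n"
            f"Must include: {', '.join(self.must_include)}\n"
            f"Must avoid: {', '.join(self.must_avoid)}"
        )

brief = ContentBrief(
    objective="Show why documented standards improve AI output",
    audience="Marketing leaders evaluating AI tools",
    deliverable="Blog post",
    length_words=800,
    tone="Confident, practical, first person plural",
    must_include=["a concrete example", "a clear call to action"],
    must_avoid=["unsupported claims", "generic filler"],
)
print(brief.to_prompt())
```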
Just the act of starting an AI project like this almost certainly brings additional benefits to the humans working on the project, who finally have this information shared with them. Over time, those directions evolve into repeatable prompts and workflows that allow AI to operate more like a reliable teammate and less like an experimental tool.
This shift also forces teams to clarify their own thinking. What are we actually asking for? How do we define quality? Who are we targeting? If we can’t answer those questions ourselves, we can’t expect an AI to.
Leaders play a key role in this process. As teams grow more comfortable using AI day-to-day, leadership should regularly check in to understand how tools are being used, what kinds of outputs are being produced, and whether quality expectations are clearly defined and upheld. The presence of AI doesn’t eliminate the need for oversight; it reshapes where and how that oversight happens.
Define Quality, Then Measure Against It
At this point, we’ve created the instructions for WHAT we are creating, like the specifics for length, tone, etc. Using our blog post example, our AI tool is now helping to create an article that uses the amazing ingredients we’ve given it, hits the desired length and format, follows our style guide, and looks like the other content we produce. Many teams would stop there.
It’s important to go a step further and define the rubric for what quality looks like to you, your team, or your company. When we define what makes a “great” deliverable, we create benchmarks that both humans and machines can use to evaluate work. Many leaders can rattle off these guidelines. Again, imagine training a new team member and explaining what makes your company unique, and what you’re looking for when you review their work.
This is where AI becomes more than a writing or ideation tool. It becomes an editor, a reviewer, and a guide. These quality guidelines, captured as a scoring rubric, can again be created and shared with our teams and the AI tools we use. Once we’ve shared our expectations, we can ask AI to grade our content against those goals. Did it meet the brief? Was it on-brand? Was it clear and actionable? Did it meet the goals we stated, or the target audience we intended? Is it unique enough, and does it share a point of view?
Even more useful, AI can identify where a piece of content falls short and suggest how to fix it. If it didn’t have enough context to do the job well, it can prompt us to go back and supply the missing pieces. In this way, AI becomes a mirror, helping us see our own blind spots in process and communication.
Our AI process scores content against our quality standards, checks its own work, and recommends changes, which often include asking our creators to add a more unique perspective, communicate a concept more clearly, or provide better ingredients. Yes, AI can and will create that article for you given what you’ve supplied, but it should also push back and tell our users what they could be doing better.
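Here’s a hedged sketch of what that scorecard step can look like in practice. The criteria are examples, not our actual rubric, and call_llm is a placeholder for whichever model API your team uses.

```python
# Example rubric; replace with the criteria your team actually agrees on.
RUBRIC = {
    "met_the_brief": "Does the draft deliver what the brief asked for?",
    "on_brand": "Does it follow our voice and style guide?",
    "clear_and_actionable": "Is it clear, specific, and actionable?",
    "unique_point_of_view": "Does it say something only we would say?",
}

def score_draft(draft: str, call_llm) -> str:
    """Ask the model to grade a draft against the shared rubric and to
    name the missing ingredients the human author should supply."""
    criteria = "\n".join(f"- {key}: {question}" for key, question in RUBRIC.items())
    prompt = (
        "Score the draft below from 1-5 on each criterion, then list the "
        "specific changes, or missing ingredients the author must supply, "
        "needed to reach a 5 on every one.\n\n"
        f"Criteria:\n{criteria}\n\nDraft:\n{draft}"
    )
    return call_llm(prompt)  # call_llm is whatever client your team uses
```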
For this to work, though, quality guidelines must be documented, accessible, and kept up to date. Hosting rubrics and content expectations in a shared space, whether it’s a team wiki, style guide, or enablement platform, ensures that both human team members and AI agents are referencing the same source of truth. That consistency is critical for scaling quality across teams and projects.
The introduction of AI can also be a valuable moment to revisit old guidelines. Many content and marketing teams have documentation that hasn’t been reviewed in years, or standards that live in the heads of team members and were never documented. The push to integrate AI gives teams a reason to reevaluate what best practices should be, and to update their expectations to reflect both current standards and future capabilities.
Brainstorm, Test, and Iterate with Purpose
AI’s value isn’t just in automation. It’s in speed and volume. When you need five headline ideas, AI can give you 50. The challenge is using that abundance strategically.
By establishing a strong foundation, clear prompts, defined goals, and a shared understanding of quality, we can use AI to create multiple options that reflect our objectives. We can ask for specific variations or versions of elements like titles, headlines, or social copy. The key here is that we purposefully build in human stages to review recommendations and select the best options. We can test, iterate, and make informed decisions faster.
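A rough sketch of that variation step, with the human stage kept deliberately outside the code: generate stands in for your model call, and approved_by_editor is a hypothetical review hook, not a real API.

```python
def headline_variations(brief_prompt: str, article: str, generate, n: int = 50) -> list[str]:
    """Ask for many on-brief candidates; humans make the final call."""
    prompt = (
        f"{brief_prompt}\n\nWrite {n} candidate headlines for the article "
        "below. Put one per line and vary the angle, length, and hook.\n\n"
        f"{article}"
    )
    return [line.strip() for line in generate(prompt).splitlines() if line.strip()]

# The purposeful human stage: review, shortlist, select.
# candidates = headline_variations(brief.to_prompt(), draft, generate)
# shortlist = [c for c in candidates if approved_by_editor(c)]  # hypothetical
```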
This is especially useful for teams that rely on high-volume content creation or constant experimentation. Instead of starting from scratch each time, they can use AI to generate consistent, on-brand variations and spend their energy and creativity making the final decision or adapting the recommendations.
The key is not to accept the first output as the final product, but to treat AI as part of the creative cycle: brainstorm, draft, revise, and refine. Team members should feel empowered to own the final product, not just pass along what AI created. That ownership includes the responsibility to evaluate quality, make human-centered decisions, and ensure the output meets the goals and standards of the team. As with any other tool, responsibility for quality falls on the user, not the tool itself. “That’s how the AI created it” should never be an excuse for mediocre quality.
Building Long-Term AI Alignment
Using AI this way has a ripple effect. It improves outcomes today but also builds better team practices in the long run. When we codify our standards and write down what we expect, both for human team members and AI, we create shared documentation that helps with onboarding, cross-team collaboration, and process clarity. We can centralize the components that are most consistent, baking them into our standard processes, while asking team members to focus on quality of content and value of the ideas.
This kind of alignment pays off well beyond the AI use case. It helps teams stay consistent, even as people or projects change. It helps new team members ramp up faster. And it ensures that AI isn’t operating in a silo, but as part of a connected team.
The technical benefits of using AI are often clear, but the opportunity now is to use it more intentionally and use the process of adopting AI as a chance to reset, clarify, and improve quality. By treating AI like a real collaborator, one that needs context, structure, and feedback, we make smarter use of its capabilities and raise the bar for our own work in the process.