Minuttia’s AI Policy with Use Cases, Principles & More



This piece of content is the work of a human mind.

Almost a year ago, we shared our predictions on generative AI and our decision not to use AI in our Content Creation service.

Since then, we’ve created a set of guidelines for how our team can use AI responsibly and added them to our policy.

In this post, we’ll be sharing:

  • Cases where we may use AI
  • The core principles of our AI policy
  • How we protect ourselves and our partners against risks associated with AI use

We hope that after reading this article, you’ll have a better understanding of our stance on AI.

Key Points

  • We use AI-powered tools like Grammarly, Hemingway, and Clearscope for content briefing and in our content creation process.
  • We don’t use any paraphrasing tools.
  • We aim for a minimum of 60% human contribution in every draft, with an average score of 80% across all content.
  • We have created SOPs for team leads to know how to handle AI-related issues.
  • Our SEO Team uses tools like ChatGPT ONLY for content instructions, not for research. All research is carried out by humans to ensure depth and accuracy.
  • We have added AI tracking to our Deliverables Tracking Sheet to monitor AI usage.
  • Drafts with 60%+ AI are subject to rewrites by our writers.
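For the technically inclined, the thresholds above can be expressed as a small sketch. The function names and sample scores are hypothetical illustrations; in practice, this tracking happens in our Deliverables Tracking Sheet.

```python
# Illustrative sketch of the AI-usage thresholds described above.
# Names and sample scores are hypothetical, not our actual tooling.

MIN_HUMAN_PER_DRAFT = 0.60   # minimum human contribution per draft
TARGET_AVG_HUMAN = 0.80      # target average human score across all content

def flag_for_rewrite(human_score: float) -> bool:
    """A draft with 60%+ AI (i.e., under 60% human) is sent back for a rewrite."""
    return human_score < MIN_HUMAN_PER_DRAFT

def meets_portfolio_target(human_scores: list[float]) -> bool:
    """Check the 80% average human-contribution target across all drafts."""
    return sum(human_scores) / len(human_scores) >= TARGET_AVG_HUMAN

# Example: four drafts with hypothetical human-contribution scores
drafts = [0.95, 0.82, 0.55, 0.90]
rewrites = [s for s in drafts if flag_for_rewrite(s)]
print(rewrites)                        # → [0.55]
print(meets_portfolio_target(drafts))  # → True (average is 0.805)
```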

Now, let’s discuss cases where we may use AI in greater detail.

Cases Where We May Use AI

No doubt, AI is helpful.

However, we want to be completely honest about how we incorporate AI into our work for clients.


Content Briefing

When we use AI: We use AI tools like ChatGPT for generating initial outlines and content structure ideas.

This helps us develop content briefs faster and provides a good starting point that we can refine further.

When we lean back: We treat the outlines ChatGPT generates as ‘generic templates,’ meaning they can’t be used directly and only serve as a baseline.

Our process ensures these AI-generated outlines are reviewed and enhanced by our SEO strategists.

This helps us ensure that every outline adheres to SEO best practices, aligns with our strategic goals, and meets our quality standards.

As our SEO Lead, Antonis Dimitriou, says:

Generating an article outline using just AI won't get the job done, especially in the modern era of SEO. Thus, while we incorporate AI frameworks, we make sure to analyze the SERP, scrutinize the competition, and consider unique information that other pages lack. We also pay close attention to information hierarchy, intent, and, of course, the target audience. These steps ensure that our content is not only competitive but also uniquely valuable to our readers.

Video Editing

When we use AI: AI tools like Opus Clip have been quite useful in our video editing workflows and for generating captions.

They make it much easier and faster for us to repurpose a long video into shorter, bite-sized clips for social media.

When we lean back: We don’t use AI to create videos. Video editing is a complex and dynamic process.

While AI can assist in automating routine tasks such as audio enhancement, removing filler words, or generating captions, it lacks narrative sense, decision-making, and creative judgment.

Setting the right tone, choosing the appropriate pacing, and crafting a compelling story are aspects where human intelligence and emotional insight are irreplaceable.

Our team of skilled video editors brings these human elements to each project, ensuring our videos resonate deeply with viewers.

As Zacharias Xiroudakis, Content Marketing Specialist at Minuttia shares:

We use AI tools like Opus Clip proactively to ensure brand alignment as regards logo and fonts, identify relatable parts in a video, and repurpose videos for specific social platforms.

Original Content Ideas Generation

When we use AI: We use AI to help us explore new content themes and swiftly generate ideas. This technology helps us bypass the lengthy discussions typically required in traditional brainstorming sessions.

When we lean back: While AI can offer a starting point by generating numerous ideas, most of these suggestions are generic and uninspiring.

For every 20 ideas produced by AI, perhaps only one proves valuable. So, every AI-suggested idea undergoes a rigorous evaluation process handled manually by our team.

Our experts assess each idea against a comprehensive set of criteria to ensure its viability and alignment with our objectives.

These criteria include:

  • The type of content needed
  • The target audience and their specific pain points
  • The desired perspective or angle of the content piece
  • Alignment with our brand values and business goals
  • Consistency with our broader content marketing strategy

As our Content Marketing Strategist, Milica Radovanovic, says:

AI gives us a springboard for creativity - it speeds up our ideation processes considerably - but it is in no way enough to create content that resonates with readers. It is up to our team to transform raw AI suggestions into content pieces that deliver true value.

Ideation and Brainstorming

When we use AI: We can use AI tools to help us gather insights that serve as a foundation for our research.

When we lean back: The keyword here is ‘foundation,’ not ‘final piece.’

We deeply analyze and investigate whatever AI throws our way to ensure that the depth, accuracy, and contextuality we’re trusted for aren’t compromised.

As Antonis Dimitriou elaborates:

Generative AI can be greatly used to identify audience pain points and interests and from there, you can use this info to come up with queries or topics for your content strategy.

Graphic Design

When we use AI: In our graphic design process, AI plays no direct role in creating final visuals.

Instead, our design team handles every aspect of graphic creation from start to finish, ensuring high-quality, custom designs for each project’s unique requirements.

When we lean back: Our design services aren’t just about creating visually appealing designs but also about how well they integrate with our content marketing and SEO expertise.

So, our designers engage in a thorough end-to-end process, from wireframing and prototyping to creating visual assets like icons, illustrations, and infographics.

While AI tools might offer basic layouts or color schemes during the ideation phase, these elements are merely starting points and are not directly used in our design outputs.

Our AI Policy Principles

Below are the principles that guide our use of AI. They protect our integrity, maintain the quality of our work, and uphold the trust we have built with our clients.

  • We prioritize our human team members. This means that we only use AI tools (and allow AI tools to be used) to support their skills, not replace them. We want to create a work environment where technology serves people, and not the other way round.
  • We create content that’s original. AI tools like ChatGPT are built on pre-existing human data, which makes it nearly impossible for them to produce genuinely original or unique concepts. As a result, it’s against our standards to use AI to create standalone content.
  • We have guardrails for AI use. We only use AI for processes that won’t affect or pose any risks to the brand integrity, performance, or quality that our clients trust us for. So, everything we do with AI is done within controlled environments.
  • We use only approved tools. We aren’t ignoring AI, but we’re very strict and responsible about what we allow to touch our clients’ work. Currently, we use Grammarly for grammar checks, Hemingway for improving readability, and Clearscope for SEO optimization. Each of these tools was chosen based on its reliability and suitability for our standards.
  • We take our clients’ data privacy seriously. We don’t train AI tools with any first or third-party data from our clients. This strict policy prohibits anyone within the team from sharing confidential or proprietary client information.
  • We’re transparent about our stance on AI. As we continue to assess and update our policies, we ensure everyone on the team is on the same page and uses AI responsibly.

How We Protect Ourselves Against AI

Just as you would treat any other risk, our approach to AI is calculated and cautious.

Since integrity and high-quality work matter to us, here are the ways we protect ourselves (and our clients) against AI:

  • Our AI Policy: Our stance on using AI sets the foundation for how we integrate AI into our processes at Minuttia.
    Our AI policy clearly specifies the contexts in which AI tools can be used, without compromising our standards.
    By setting these strict guidelines, we aim to prevent any potential abuse of AI technologies.
  • Legal Agreements: To enforce our AI policy, we enter into legal agreements with all our content writers that bind them to our rules concerning AI use. These agreements help us maintain control over content authenticity and safeguard our clients’ interests.
  • AI Detection Software: We use advanced AI detection tools such as Originality AI and Quetext to identify the extent of AI use in content creation.

However, we do not treat the results from these tools as absolute. While Originality AI and Quetext are sophisticated tools, they still have their flaws.

So, we don’t use detection scores as a straightforward measure of AI versus human input.

For instance, a score showing 30% AI doesn’t mean that 30% of the content is AI-written. Instead, it suggests a 30% likelihood that some form of AI was used.

We combine insights from these tools with manual reviews and the professional judgment of our experienced editors.

This balanced approach has proved useful in helping us maintain high standards for our clients while responsibly embracing the benefits that AI tools can offer.

AI Detection Flaws & False Positives

Tools like Grammarly and Clearscope have been useful in helping us simplify our content creation process.

Yet, their use has raised important considerations around AI detection tools.

Originality AI, for example, can sometimes misinterpret the content improvements made by Grammarly and Clearscope as AI-generated content.

This misunderstanding can lead to false positives — where human-written content is wrongly flagged as AI-produced.

Despite the tool’s accuracy, approximately 2% of tests result in false positives. Here is how content created under various scenarios is typically classified:

  1. AI-Generated and Not Edited: Flagged as AI-generated.
  2. AI-Generated and Human Edited: Still flagged as AI-generated, as the base content was AI-produced.
  3. AI Outline, Human Written, and Heavily AI Edited: Considered AI-generated due to significant AI involvement.
  4. AI Research and Human Written: Recognized as original human-generated content.

There have also been recent criticisms, such as reporting from Ars Technica and studies from Cornell University, arguing that AI detectors often fail to deliver accurate results; some researchers even refer to them as “snake oil.”

Given these challenges, we adopt a balanced view of AI detection tools.

We use them to aid our editorial process but remain cautious of their limitations.

So, our strategy includes using these tools as part of a broader assessment structure, where multiple checks and human oversight take the lead.

How to avoid false positives

Our team recommends these 8 tips to help reduce false positives:

  1. A detection score of 60% Original and 40% AI isn’t a false positive. It reflects a 60% confidence level that your content is original.
  2. Create all articles in Google Docs (whenever possible) so that you can use Originality AI’s free Chrome Extension to help prove your content is original.
  3. Editing AI-written content isn’t a false positive; it‘s a true positive.
  4. Having AI edit your work isn’t a false positive; it’s a true positive.
  5. When any amount of AI touches the content, it can cause the entire article to be flagged as AI content.
  6. “Cyborg” writing, where multiple AI tools are used to create an outline, suggest edits, and optimize the content, can increase the chance of a higher AI score.
    So, we use a content optimizer tool, similar to SurferSEO or MarketMuse but 100% free, that doesn’t use AI, which reduces the chance of a false positive.
  7. Unusual content formatting can reduce the accuracy of the detector tools, causing an increase in false positives or false negatives.
  8. The shorter the text, the less accurate the detection score. We recommend checking at least 100 words at a time.

As Nikola Djordjevic, Content Lead at Minuttia, shares:

AI detection is a contested and grey area. Still, by using AI detection tools judiciously and interpreting the results with care, it’s possible to come to a reasonable conclusion on whether or not someone is using services like ChatGPT to generate text. This way, we can ensure that our deliverables are indeed written with the creativity and depth that only human beings are capable of.

Final Thoughts

Our cards are on the table: we don’t depend on AI to create content or do our work for us.

Every AI-assisted work passes through human hands and scrutiny — twice, at least.

Plus, we’re always on the lookout for ways to do things better.

So, if there’s something out there that can improve our work and make our clients happier, we’re on it. But we tread carefully and won’t compromise on quality.

Seeking a partner who values quality, human-written content, and the thoughtful application of technology?

Schedule a call with our team’s experts.

This piece of content is the work of a human mind.
