Why Tech Titans Are Hitting the Mute Button: 15 Core Reasons Giants Are Retreating from Voice Assistant Projects

Alright, let’s talk about something that was supposed to revolutionize our lives, something that promised to be the ultimate hands-free future: voice assistants. For years, the biggest names in tech poured billions into developing and deploying these digital helpers, from Amazon’s ubiquitous Alexa to Google Assistant, Apple’s Siri, Microsoft’s Cortana, and Samsung’s Bixby. We were all told to get ready for a world where our every command would be met with an instant, intelligent response. Yet, if you’ve been paying attention, you might have noticed a subtle, yet significant, shift in the air.

It seems the titans of Silicon Valley are quietly, or sometimes not so quietly, hitting the mute button on many of their ambitious voice assistant projects. What was once heralded as the next great computing platform is now facing a harsh reality check. This isn’t just about a few minor tweaks; we’re witnessing a strategic retreat from some major players, signaling a fundamental reassessment of where voice technology truly fits in their expansive empires. And honestly, for anyone who’s been following the digital landscape, it’s a fascinating and incredibly telling development.

So, what’s really going on behind the scenes? Why are these tech giants, who once championed the voice-first future, scaling back their ambitions? It turns out there isn’t a single, simple answer, but rather a complex web of strategic missteps, market realities, technological hurdles, and shifting user behaviors. We’re going to dive deep into 15 compelling reasons why some of the biggest names in tech are rethinking their voice assistant investments, starting with the more specialized, enterprise-focused corners of the market.

1. **Voice Biometrics as a Niche Market for Big Tech**

One of the initial areas where some tech giants, specifically AWS, Google, and Microsoft, began their retreat was from the voice biometrics market. Now, you might be thinking, isn’t voice biometrics super cool and futuristic? It definitely sounds like it, offering secure authentication through the unique characteristics of a person’s voice. However, the reality on the ground, especially for call centers, revealed a different story: it’s always been a niche market.

This isn’t to say the technology isn’t useful, but rather that its application as a broad product for massive cloud providers proved challenging. For these enormous companies that thrive on scale and widespread adoption, a specialized market, no matter how technologically advanced, might not move the needle enough. When a product is only ever going to serve a small segment of businesses, the return on investment for a global tech behemoth starts to look less attractive.

Consider the operational focus of these cloud giants. They excel at providing scalable, generalized services that can be adopted by millions of users or thousands of businesses with minimal customization. Voice biometrics, particularly for high-stakes environments like financial institutions, often demands a more tailored, ‘high-touch, professional services approach,’ which simply doesn’t align with their core business models. It’s like trying to fit a bespoke suit into a ready-to-wear production line – it’s just not efficient for their scale.

2. **The High Cost and Complexity of Implementation for Businesses**

Beyond just being a niche, implementing voice biometrics solutions comes with a hefty price tag and significant complexity for businesses. While the allure of enhanced security and streamlined authentication is strong, many companies, when faced with the actual costs, simply balk. It’s not just the initial software license; it’s the integration with existing systems, the training of staff, and the ongoing maintenance that add up.

These costs quickly become prohibitive for all but the largest enterprises or those with very specific security needs. For the average business, the investment required for voice biometrics often outweighs the perceived benefits, especially when there are alternative, albeit sometimes less secure, authentication methods already in place. It’s a classic case of cost-benefit analysis leading to a ‘no-go’ decision for many potential adopters.

Furthermore, the complexity of deploying such advanced systems often requires specialized expertise, which smaller businesses might not have in-house. This necessitates relying on external consultants or dedicated professional services teams, adding another layer of cost and management overhead. For tech giants aiming for widespread adoption of their tools, a product that’s difficult and expensive for their target customers to implement is a significant barrier to success.


3. **Navigating the Minefield of Privacy Laws and Regulations**

If there’s one thing that keeps businesses up at night in the digital age, it’s privacy. And when you’re dealing with something as personal as a person’s voice, the privacy concerns become exponentially more intense. Abiding by privacy laws is a monumental challenge for companies looking to implement voice biometrics, acting as a major deterrent for many.

Regulations like GDPR, CCPA, and countless others around the globe impose strict rules on how personal data, especially biometric data, can be collected, stored, and processed. Non-compliance can lead to massive fines, reputational damage, and a loss of customer trust – risks that many businesses are simply unwilling to take, particularly for a ‘marginal product’ like voice biometrics.

This regulatory landscape is constantly evolving, making it a moving target for compliance. For tech giants, providing a voice biometrics solution means not only ensuring their own compliance but also empowering their client businesses to navigate this complex legal terrain. The sheer legal and ethical overhead involved in managing this sensitive data can easily outweigh the potential revenue, making it a strategic headache rather than a lucrative opportunity.

4. **Misalignment with Core Operating Models and Business Strategies**

This reason cuts to the heart of why some voice assistant projects, particularly those related to biometrics, just don’t fit into the grand scheme of things for tech giants. Microsoft, for instance, which once held a dominant market share with its Nuance Gatekeeper offering, found that this ‘marginal product’ didn’t quite gel with its broader operating model. Their model doesn’t favor the ‘high-touch, professional services approach required for a successful deployment’ of such a specialized solution.

Think about it: companies like Microsoft thrive on selling widely adopted software and cloud services that can be deployed with relatively less direct, intensive human intervention from their end. Voice biometrics, especially for large enterprise clients, often demands bespoke solutions, deep integration support, and ongoing consultative services. This kind of hands-on engagement is resource-intensive and deviates from the scalable, largely automated sales and support models preferred by these tech behemoths.

When a product requires a completely different sales and deployment strategy, one that doesn’t leverage existing strengths or infrastructure, it becomes an outlier. These giants have core businesses that drive their primary revenue and strategic direction. Projects that don’t neatly align with these established models often struggle for internal resources and executive sponsorship, eventually leading to a gradual retreat or outright abandonment.


5. **Reduced Internal Investment and Shifting Priorities**

It’s a tough truth in the tech world: if a project isn’t delivering expected returns or fitting into the broader strategic vision, investment will inevitably dwindle. We’ve seen this play out dramatically across the board. Microsoft, for example, has significantly ‘reduced its voice biometrics product team’ and ‘rolled most sales into its Dynamics Contact Center 365 platform,’ indicating a clear shift in focus.

This isn’t a one-off. Google has ‘reduced investment in Assistant,’ and Amazon’s ‘10,000 person layoffs had an outsized effect on the Alexa team.’ Such large-scale reductions signal a clear reprioritization away from these voice-first initiatives. When the original teams depart and ‘institutional knowledge has fallen away,’ as observed regarding Nuance Gatekeeper, the product’s future becomes incredibly precarious. It’s hard to innovate and compete when your champions and experts are no longer at the helm.

These decisions often reflect a candid assessment of market potential versus resource allocation. With the intense competition and constant need for innovation in other core areas like cloud computing, AI platforms (beyond voice), and advertising, resources are finite. When voice projects fail to meet internal metrics for growth or strategic importance, they become targets for budget cuts, ultimately leading to their decline.


6. **The Elusive Quest for Deep Product-Market Fit**

Achieving ‘deep product-market fit’ is the holy grail for any tech offering, and it proved particularly elusive for AWS and Google in the voice biometrics space. As noted by Matt Smallman, these companies ‘produced ‘good enough’ products at unbeatable price points but without the customization, fraud watchlists, or synthetic speech detection needed by top-tier banks.’ This observation highlights a critical gap.

While their offerings were accessible and affordable, they lacked the specialized features and robust capabilities required by demanding, high-value customers like top-tier banks. These institutions aren’t just looking for ‘good enough’; they need cutting-edge, highly configurable solutions that can withstand sophisticated fraud attempts. The generic, scalable solutions offered by cloud giants simply couldn’t meet these nuanced, sector-specific requirements.

Without this deep fit, these products remained on the periphery for critical enterprise clients, never truly becoming indispensable. It’s a stark reminder that in competitive, specialized markets, a broad, cost-effective solution often isn’t enough. Customers demand tailored excellence, and if the tech giants can’t provide that without significant, non-scalable investment, they’re better off stepping away.


7. **Escalating Reputational Risks from Synthetic Speech and Deepfakes**

The rise of sophisticated synthetic speech technology and deepfakes has introduced a new and alarming layer of risk, particularly for voice biometrics. As this technology has evolved, so has the chatter around fraudsters being able to breach voice biometrics systems. This isn’t just a theoretical concern; it’s a very real and growing threat that has directly impacted the decisions of tech giants.

OpenAI, a leader in AI, even ‘warned businesses to take steps ‘like phasing out voice-based authentication’ ahead of the launch of its voice cloning tool.’ While that specific tool’s release was delayed, the warning itself underscores the severe vulnerability. The ease with which synthetic speech can now mimic human voices creates an unacceptable ‘reputational risk of a compromise’ for any major tech company providing authentication services.

For AWS and Google, the growing difficulty of detecting synthetic speech made it ‘easier for them to step away’ from their voice biometrics offerings. The potential for a high-profile breach, where their technology is exploited by deepfakes, could inflict immense damage on their brands and user trust. In a world where AI can flawlessly mimic human speech, the integrity of voice-based authentication systems becomes a critical liability that many tech giants are no longer willing to shoulder.

Alright, buckle up, because if the first section detailed the strategic and technical hurdles, this next part is all about what really got under our skin as users and messed with the bottom line for these tech titans. We’re diving into the everyday frustrations, the trust issues, and the cold hard cash problems that ultimately cooled off the voice assistant craze for general use. These are the reasons why our once-beloved digital helpers are now gathering dust or facing the quiet axe from their creators.

8. **Pervasive Usability and Discoverability Challenges**

Let’s be real, this might just be the biggest reason voice assistants struggled to go mainstream beyond simple tasks. It’s just too darn difficult to use them for anything more than a handful of purposes, and the core culprit is a massive lack of discoverability. Think about a well-designed webpage: you immediately see where to click for pricing, or how to download an app. There’s no ambiguity, no guessing game.

But with voice? It’s a whole different ballgame. If you wanted pricing info, would you ask, “Tell me about pricing,” or “What is the cost?” Your success hinges entirely on whether the voice app builder anticipated your exact phrasing and trained the natural language understanding (NLU) to handle it. Anyone who’s spent more than five minutes with a smart speaker knows the soul-crushing frustration of hearing, “Sorry, I can’t do that,” for the umpteenth time. It’s like talking to a brick wall that sometimes talks back, but only on its terms.

The interface itself often obscures what you can even do, making it impossible to discover new functionalities. Imagine a “Publish” button on a screen – you see it, you know it’s an option. With voice, that visual cue is completely absent. While there are clunky ways to inform users, like, “By the way, did you know you can…,” these often pop up when you’re not ready for them, overloading you with information you neither want nor need in that moment. It’s not intuitive, and it breaks the flow of interaction.

Compare that to a mobile app, which is a powerhouse of discoverability. You’ve got an icon on your screen constantly reminding you it exists, ready for a tap. That’s why companies push you towards their apps even when a mobile website would suffice – the persistent visual presence drives engagement. For voice, the lack of an equivalent discovery mechanism meant users typically only remembered and utilized a few basic commands, leading to its limitation as a broad platform, even if it remained handy for first-party tasks like playing music or turning off lights.

9. **Persistent Speech-to-Text and Natural Language Understanding Errors**

Adding insult to injury, these pervasive usability issues were severely compounded by fundamental technological hiccups in the underlying speech-to-text (STT) and natural language understanding (NLU) systems. Every interaction felt like a roll of the dice: would the assistant actually understand what you were saying this time? It was a constant source of anxiety, making many users simply give up on trying to use voice for anything beyond the bare minimum.

This wasn’t just one problem, but a tricky two-part challenge. The STT, which converts your spoken words into text, was handled at the platform level, but the NLU – the system that interprets the *meaning* of those words – often required configuration by individual developers. This placed a huge burden on them, demanding they possess deep NLU knowledge and an almost psychic ability to predict every possible user request before their app even launched.

Furthermore, requests typically involve two key components: what a user *wants to do* (the intent) and *with what* (the entities). When there are only a few options and a wealth of data on user request patterns, the experience can be pretty smooth. But introduce more complexity, or ask for something a bit more obscure, like the name of a niche artist, and the system would often fall apart. This explains why first-party experiences, with their limited scope and controlled environment, could be more enjoyable, yet still prone to failing when pushed outside their comfort zone. Even with the incredible advancements we’ve seen in large language models (LLMs) like ChatGPT, which are vastly superior at understanding intent, the earlier NLU limitations were a significant barrier to the platform’s success.
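The intent-plus-entity matching described above can be sketched in a few lines. This is a deliberately simplified, hypothetical matcher (the names `parse`, `KNOWN_ARTISTS`, and the intent labels are illustrative, not any platform’s real API): it only recognizes the exact phrasings and entity values the developer anticipated, which is precisely why an unanticipated request, or a niche artist, falls through to the dreaded fallback.

```python
# Toy voice-app NLU: map an utterance to an intent (what the user wants
# to do) and an entity (with what). Illustrative only -- real platforms
# use trained models, but the failure mode sketched here is the same.

KNOWN_ARTISTS = {"taylor swift", "the beatles"}  # entity values the app was "trained" on

def parse(utterance: str):
    """Return (intent, entity); ('Fallback', None) means 'Sorry, I can't do that.'"""
    text = utterance.lower().strip().rstrip("?.!")

    # Intent patterns the developer anticipated before launch:
    if text in ("tell me about pricing", "how much does it cost"):
        return ("GetPricing", None)

    if text.startswith("play music by "):
        artist = text[len("play music by "):]
        if artist in KNOWN_ARTISTS:              # entity resolution succeeds
            return ("PlayArtist", artist)
        return ("Fallback", None)                # obscure artist -> falls apart

    return ("Fallback", None)                    # unanticipated phrasing -> fails

print(parse("Play music by The Beatles"))  # ('PlayArtist', 'the beatles')
print(parse("What is the cost?"))          # ('Fallback', None) -- phrasing never anticipated
```

Note how brittle this is: “What is the cost?” expresses the same intent as “How much does it cost?”, but because the developer never enumerated that phrasing, the request simply fails.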

10. **The Absence of a ‘Killer App’ and Low Third-Party Business Adoption**

All these usability and technical woes combined to prevent smart assistants from ever landing that elusive ‘killer app’ – that one must-have third-party application that would truly justify the platform’s existence and drive widespread adoption. Think of the early iPhone with its revolutionary apps; voice never found its equivalent, and that was a major problem.

Businesses, understandably, weren’t exactly lining up to build for these platforms. While some might have dipped their toes in the water, the calculus was simple: if you’re going to invest precious development resources, are you going to put them into an experience that often delivers subpar results, or into your mobile app, which is a known quantity and a more reliable channel? The choice was often an easy one, pushing development away from voice.

Adding to the challenge, Amazon and Google were noticeably late to introduce effective monetization tools for third-party developers. This meant there was little financial incentive for businesses to commit significant resources to building out robust voice experiences. Without a clear path to generating revenue, or at least a compelling strategic advantage, the motivation just wasn’t there.

For users, this created a frustrating feedback loop. Their favorite brands weren’t on the platform, and there was no “Flappy Bird” equivalent to draw them in. This meant they had less reason to actively seek out and engage with third-party voice apps, further reinforcing the lack of development. It became a vicious cycle, effectively preventing these assistants from ever truly taking off as vibrant platforms for external innovation.

11. **Amazon’s Misguided Monetization Strategy for Alexa**

For Amazon, the vision for Alexa was clear from the outset: it wasn’t about selling devices at a profit, but about becoming the ultimate hands-free shopping assistant. When Alexa launched, Amazon pioneered a new business model, fully expecting users to seamlessly order products – from pizza to books – simply by speaking a command. It was meant to be the future of impulse buying, a direct pipeline from thought to purchase.

However, reality proved to be a harsh mistress. That grand vision of voice commerce never truly materialized. Despite Alexa racking up billions of interactions every week, these were overwhelmingly for simple commands and informational queries, not complex purchasing decisions. Users were happy to ask for the weather or play music, but when it came to buying, they overwhelmingly preferred to use their phones or computers, where they could see what they were ordering and review details.

This fundamental mismatch between Amazon’s monetization hopes and actual human behavior led to significant financial strain. Users like Abigail Barnes even found that Alexa’s attempts at monetization backfired spectacularly. She recounted how her device started barraging her with “frequent delivery notifications, asking her to review purchases or prompting her to reorder items,” which she found “really irritating.” These intrusive prompts actively drove users away, undermining the very goal they aimed to achieve.

Experts noted that while making money from Alexa *might* be nice, human behavior dictates that voice assistants are simply better for information and task management than as a primary shopping channel. It seems if Amazon could figure out a way to give users a “last look” on their phone before finalizing a purchase through Alexa, people might feel more comfortable changing their buying behavior. But without that, the initial strategy was, in hindsight, a colossal misjudgment.

12. **Significant Financial Losses Within Voice Assistant Divisions**

If you want to know why big tech is hitting the brakes on voice, just follow the money – or, in this case, the lack thereof. The financial hemorrhaging from these ambitious voice assistant projects became too significant to ignore. Amazon’s hardware group, which includes Alexa, was reportedly losing a staggering $3 billion in a single quarter in 2022. And no, that wasn’t just Kindle taking one for the team; Alexa was a major contributor to those losses.

The numbers only got bleaker, with projections indicating Amazon was set to lose an eye-watering $10 billion from the Alexa division within a single year. These kinds of figures don’t just get shrugged off; they trigger massive strategic re-evaluations and, unfortunately, lead to widespread job cuts. We saw Amazon’s massive 10,000-person layoffs have an “outsized effect on the Alexa team,” a clear sign of severe reprioritization.

Google Assistant, while perhaps never reaching the same dizzying investment heights as Alexa, also saw its fortunes wane, with Google significantly reducing investment in the team. The end of 2022 and early 2023 was widely described as a “bloodbath” for many tech companies, but it was particularly brutal for the Amazon Alexa and Google Assistant teams, signaling a collective retreat from an unsustainable financial drain. When the cost-benefit analysis skews so heavily into the red, even tech giants with seemingly bottomless pockets have to pull the plug.

13. **Widespread Consumer Privacy Concerns and Eroding Trust**

While some might argue that initial adoption wasn’t hampered by privacy fears, a deeper look reveals that widespread consumer privacy concerns played a significant role in eroding trust and making voice assistants less appealing over time. For many, the idea of a constantly listening microphone in their home or on their person raised an uncomfortable red flag, even if they initially tolerated it for convenience.

Multiple reports highlighted this unease. A 2020 survey revealed that a striking 82% of respondents harbored concerns about data collection from devices like voice assistants. These fears were not unfounded. In 2019, Bloomberg dropped a bombshell, reporting that Amazon actually employed thousands of people to listen in on conversations recorded through Alexa. The Guardian followed suit, uncovering that Apple had shared private Siri conversations with contractors, too. These revelations, understandably, sparked outrage and a profound sense of intrusion among users.

Users like Abigail Barnes vividly described how she “became concerned about conversation data ‘being stored in a cloud somewhere.'” This sentiment was echoed by Helen Jambunathan of Canvas8, who noted that voice assistants “have never shaken connotations of invasion and intrusion.” Beyond just eavesdropping, there were also “several high-profile instances of voice assistants being creepy, racist and giving dangerous advice,” further damaging their credibility and fostering a sense of mistrust. These cumulative privacy breaches and ethical lapses made it increasingly difficult for the public to feel truly comfortable with these devices, contributing significantly to their waning appeal in the broader market.

14. **Unreliable Voice Commands Leading to User Frustration**

Imagine wanting to simply turn off the lights before bed, only to have your voice assistant play deaf multiple times. This isn’t a hypothetical scenario; it was a common, infuriating reality for many users, directly contributing to the decline of voice assistants in general use. Abigail Barnes, a former Alexa enthusiast, perfectly captured this sentiment, explaining that her “voice commands became unreliable.”

She vividly recounted her exasperation: “I stopped asking her to turn off the lights when I went to bed, as I’d ask a number of times and then manually turn them off anyway.” What was initially conceived as a time-saving marvel quickly transformed into a frustrating time-sink. This isn’t just a minor inconvenience; it’s a fundamental breakdown in the user experience when a device fails to perform its most basic functions consistently.

This unreliability is intrinsically linked to the persistent speech-to-text and natural language understanding errors we touched upon earlier. When every interaction feels like a “game of chance,” where you’re unsure if your request will be understood, trust erodes rapidly. Users, faced with the choice between repeating commands endlessly or just doing it themselves, overwhelmingly chose the latter. The promise of effortless interaction gave way to the reality of constant annoyance, pushing voice assistants into the dreaded drawer of technological disappointments.

15. **The Overall Trend of Slowing Adoption and Decreased Use**

While initial adoption rates for smart speakers were undeniably healthy, reaching tens of millions of users in key markets, the momentum simply didn’t last. The once booming years for voice assistants gave way to a sobering reality: a clear trend of slowing adoption and, for many, a noticeable decrease in their overall use. Despite Amazon touting “billions” of interactions per week, these were, as discussed, primarily simple, low-value tasks that didn’t translate into a sustainable growth model.

Voicebot Research observed that fewer Americans were actually using voice assistants overall, contradicting some of the earlier, more optimistic narratives. Another report indicated that voice assistant use had been falling over the past three years, with the adoption of smart speakers also slowing. Helen Jambunathan from Canvas8 articulated this perfectly, stating that voice assistants “have not become as socially sticky as promised.” They simply didn’t integrate into our daily lives with the pervasive necessity that platforms like smartphones achieved.

This slowing adoption, combined with the other issues we’ve discussed – from usability challenges and monetization misfires to privacy concerns and financial losses – created a perfect storm. It signaled to tech giants that the general-purpose voice assistant, as a platform for widespread third-party development and significant revenue generation, was not fulfilling its grand promise. The market had reached a plateau, and for companies that thrive on explosive growth and ecosystem dominance, a plateau is often just another word for failure.

***

So, what does this all mean for the future of voice, and even natural language interfaces in general? It’s clear that the grand vision of a universal voice-first platform, where we command our entire digital lives through spoken words, has largely fallen silent. The dreams of voice assistants becoming the next iOS or Android, hosting a thriving ecosystem of third-party apps, have largely gone unfulfilled. The challenges of discoverability, NLU errors, and the absence of that ‘killer app’ proved too formidable for a general-purpose agent.

However, this isn’t necessarily the death knell for voice as an input, nor for specialized voice AI. As we’ve seen, voice biometrics will continue to be a powerful component of multi-factor authentication systems, with companies like Pindrop, Daon, and Veridas actively innovating to stay ahead of threats like synthetic speech. Moreover, voice assistants have found incredibly meaningful niches, particularly in areas like healthcare. For individuals with disabilities, dementia, or mobility challenges, Alexa can be a “game changer,” enabling everything from controlling devices to connecting with loved ones, offering a deeply impactful first-party experience that truly transforms lives.

Companies like SoundHound AI are also carving out success by focusing on embedded, multilingual, and brand-ownable voice platforms tailored for specific industries like automotive and enterprise customer service. Their Speech-to-Meaning® architecture, which processes speech locally, addresses critical needs for privacy, customization, and low latency that cloud-first giants struggle to meet. This vertical specialization and focus on “first-party experiences” – where the voice interface is deeply integrated into a specific product or service – appears to be the path forward. Even with the rise of powerful LLMs like ChatGPT, which are vastly improving NLU, experts predict that chat as a platform might face struggles similar to voice’s. The inherent challenges of discoverability and creating a general-purpose agent persist. The revolution isn’t failing; it’s simply evolving into more focused, integrated, and, frankly, more sensible applications. The future isn’t about *a* voice platform, but about *many* voice inputs serving very specific, impactful purposes.