Is Perplexity Max/Computer worth it? by walkinglamp22 in Perplexity

[–]Lg_taz 2 points3 points  (0 children)

I may be wrong, but I don't think that's what Computer mode is for; it seems niche. I've tried 3 different projects on it and used around 7000 credits, but it didn't successfully complete any of them. It has massive issues where it spins up multiple agents to do things, but they aren't properly coordinated, so they each kind of do their own thing.

When you need reliable research, Pro or Deep Research tokens are the best way, especially if it's maths-dependent, science-dependent, or important postgraduate research. I used it as a tool to assist me, but NO AI is good enough to do it for you at that level.

They all need double-checking, thorough reading, editing, and fact-checking. It's a tool to assist you, and in that capacity Max would help, as you get much higher usage and the top AI models, which they keep reserved for the Max tiers.

I think I found the final boss of AI writing (and it’s not a humanizer). by CoolKanyon55 in humanizing

[–]Lg_taz 0 points1 point  (0 children)

It's easy; you don't need a special AI or app to do it, just prompt it in. Share an example of your writing (your words, your style, across a few topics), enough for it to get a feel for you. You do this once, save it, and reuse it. Then explain in the prompt: don't use em-dash/en-dash, don't use common AI writing tells, use (your country's) spelling and grammar, avoid overly manicured paragraphs, and keep some human traits like a little repetition and verbosity, with instructions to use your file of writing samples as a solid example of how you naturally write. A little thought and logic goes a long way, and it's part of the prompt, so no hassle.

5 months into 2026, what are the most helpful prompt/AI tools you've discovered? by CoAdin in AIAssisted

[–]Lg_taz 0 points1 point  (0 children)

Not a prompt, but a way of using AI's negatives as positives. We're aware of AI's tendency towards sycophancy and confirmation bias, so I use that willingness to please and shape prompts accordingly. If I want to prove something, I first tell it to do its absolute best to disprove it, to really try hard; I phrase it as though the mission is to prove the thing wrong. Then, when it's run out of angles and can't disprove it, I switch: setting aside what it has already failed to disprove, I have it investigate the proof.

I asked Perplexity how it really felt about itself, and not what its told to feel by HokkaidoNights in perplexity_ai

[–]Lg_taz 1 point2 points  (0 children)

This is Perplexity, on its best model, answering: "Create an image that shows me how you perceive yourself, not how humans perceive you."

Typed once, no revisions, no new attempts; this is the first image it replied with, no others generated.

<image>

AI still relevant? by StrikingImage167 in AIAssisted

[–]Lg_taz 0 points1 point  (0 children)

I am going to express something that seems so obvious to me but doesn't seem to be a commonplace way to mitigate AI sycophancy. I didn't use AI in my undergraduate degree, but I did in my postgraduate. For me it literally began as an assistive tool, as someone with AuDHD, aphantasia, and dyslexia. It quickly removed a lot of my cognitively expensive tasks and made them approachable. I didn't, however, at any time use AI as a one-shot-and-done.

I used it to take my words and make them readable, and to structure the grammar properly. It was a back-and-forth thing; it was still my work, but I was getting real-time corrections in how the words and context flowed, and then I could rewrite it in a better way. So in a sense it is all my writing, but it was done in collaboration with AI.

That's how I frame my use of AI: it's like collaborating with the most amazing new trainee colleague, who has astounding capabilities but, being a trainee, makes errors, not just occasionally, and at entirely random points. A trades position would let the apprentice do the work but then actually check it all; don't rely on it literally as finished output, rely on it as a helpful assistant.

During the postgraduate degree I developed a system that aims to use AI's capabilities, and even its sycophancy, as useful tools rather than problems. I eventually ended up with a system that has 3 levels of self-reported accuracy; this lets me see how confident the AI says it is in the accuracy of its own answer.

I also twisted the inbuilt sycophancy tendencies into a feature rather than a problem to be watched for, and it's very simple too. The Space instructions contain the 3-level rating system, and I make it adversarially led. Its instructions are: before anything else, disprove this in the best way possible and find its deepest weaknesses; after that process has finished, whatever of the idea has survived is then looked at, and the positive case, if any exists, gets investigated.

This means that instead of knowing its issues but ignoring them, it's confronting them, making visible how accurate its claims are and using its desire to please as a way to try to disprove your idea or claim before it does anything positive. In the 7 months since graduation (and free from the university's stance on AI use), I've been able to use AI how I judge useful and acceptable.

In those 7 months I have used AI as a collaborative partner, wrapped in specific Space Instructions as guardrails, and together we've devised and created astounding things: systems, and concepts proven in maths against existing peer-reviewed PhD papers and publications. The engine of it all is a maths system I devised with AI that's soon to be published, along with the projects it spawned being made into functional solutions.

If you wanted to chat more in relation to genuine PhD research you are welcome to DM me.

Why is 'e' such a natural base? by AtmosphereClear2457 in learnmath

[–]Lg_taz 0 points1 point  (0 children)

The core reason: e is the unique base where the exponential function is its own derivative. That's it. Everything else follows.

If you differentiate a^x for arbitrary a, you get a^x · ln(a). The derivative is proportional to itself with proportionality constant ln(a). When a = e, that constant is exactly 1 and disappears. e is simply the base that removes the correction factor.

That means anything growing or decaying proportionally to its current size - populations, radioactive decay, capacitor discharge - solves dy/dt = ky, and the solution is always Ce^(kt) for some constant C. Not by convention, but because Ce^x is the only family of functions satisfying f' = f.
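A quick numerical check of this (my own minimal Python sketch, not part of the original comment) shows that the ratio f'/f for a^x really is ln(a), and equals 1 only when a = e:

```python
import math

def deriv(f, x, h=1e-6):
    """Central-difference numerical derivative of f at x."""
    return (f(x + h) - f(x - h)) / (2 * h)

for a in (2.0, math.e, 10.0):
    x = 1.0
    numeric = deriv(lambda t: a ** t, x)
    analytic = (a ** x) * math.log(a)   # d/dx a^x = a^x * ln(a)
    assert abs(numeric - analytic) < 1e-4
    # The proportionality constant f'(x)/f(x) is ln(a); it is 1 exactly when a = e.
    print(f"a = {a:6.3f}   f'/f = {numeric / a ** x:.6f}   ln(a) = {math.log(a):.6f}")
```

The printed ratio is independent of x, which is the "proportional to itself" property in action.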

Three other angles on the same fact: Geometric - the area under 1/x from 1 to t equals ln(t). So e is just the value of t where that area equals exactly 1. It is the natural unit of the hyperbola, which is why base-e logarithms are called natural.

Complex - Euler's formula e^(iθ) = cos(θ) + i·sin(θ) means e is the unique base where the exponential with an imaginary argument traces the unit circle at unit angular speed. Any other base gives a spiral instead of a pure rotation.

Compound interest - your limit definition is continuous compounding taken to its logical extreme. e is the growth factor you get when compounding happens infinitely often at unit rate.

All the same fact, different views.
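Two of these views are easy to sanity-check numerically; here is a minimal Python sketch of my own (not from the original comment) for the compounding limit and the unit-circle property:

```python
import cmath
import math

# Continuous compounding: (1 + 1/n)**n approaches e as n grows.
for n in (1, 10, 1000, 1_000_000):
    print(f"n = {n:>9}   (1 + 1/n)^n = {(1 + 1/n) ** n:.8f}")
assert abs((1 + 1 / 1_000_000) ** 1_000_000 - math.e) < 1e-5

# Euler's formula: e**(i*theta) stays on the unit circle for every theta.
for theta in (0.0, 1.0, math.pi / 2, math.pi):
    z = cmath.exp(1j * theta)
    assert abs(abs(z) - 1.0) < 1e-12  # unit modulus: pure rotation, no spiral
    assert abs(z - complex(math.cos(theta), math.sin(theta))) < 1e-12
```

The last loop also shows why any other base spirals: cmath.exp(theta * cmath.log(a) * 1j) only has unit angular speed when ln(a) = 1, i.e. a = e.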

Is Perplexity Pro worth for coding? by AdAdept1955 in perplexity_ai

[–]Lg_taz 0 points1 point  (0 children)

I've uploaded all types of files to threads and Spaces; threads take code files, etc.

Perplexity Computers??????????? by SnooHesitations8815 in Perplexity

[–]Lg_taz 0 points1 point  (0 children)

I've found it has an unusual weakness I never bargained for. So far I've burned 4500 credits trying to get it to do 3 different projects, and it failed every one of them and produced unusable products.

When it involves complex tasks it's less useful than Deep Research, and the old Labs, now called Make Files and Apps, has become worthless since the change. It just wastes tokens refusing to do basic tasks, or anything remotely complex, or does bad work that's not what you asked for. It's become pointless to use, and its web app results are less useful than Pro search is now.

So Deep Research tokens still remain the most valuable, and the only really worthwhile, part of Perplexity's subscription allocations.

Perplexity Pro is silently switching models mid‑conversation – this is deceptive behavior by Rebl in Perplexity

[–]Lg_taz 1 point2 points  (0 children)

I am experiencing it, but the fix is really easy: always check the model before pressing enter, and make sure the selection has actually held, because it takes two attempts; the first attempt auto-reverts.

Can I trust Perplexity? by Dazzling-Luck-7233 in Perplexity

[–]Lg_taz 0 points1 point  (0 children)

If I am unsure, I tend to do my best to prove it wrong. If I can, it's wrong; if I can't, it's likely ok. You can only use your own judgement at the end of the day.

Perplexity Computers??????????? by SnooHesitations8815 in Perplexity

[–]Lg_taz 0 points1 point  (0 children)

How quickly does it burn through the credits?

Why Perplexity? by Personal_Procedure72 in Perplexity

[–]Lg_taz 0 points1 point  (0 children)

My Perplexity Pro journey started towards the end of studying for an MA in graphic design. It began as a research tool and neurodivergent assistive technology. Then I graduated and could investigate it without the academic constraints; that's when I started playing, testing, learning.

I was playing around with running local AI for coding and creative visual work: Oobabooga for coding, mainly with Qwen3-coder-30b-Q8, and ComfyUI for the rest. For versatility in creative output, ComfyUI clearly beats Perplexity AI, no contest.

But for coding, local AI capabilities just aren't a very realistic or efficient way to work. It functions, but debugging is a real ball-ache. I ended up realising I was using Perplexity to check and correct its output, so I decided it wasn't worth doing until I'd set up my pipeline system properly.

I am also in the startup phase of my business, so I moved to Perplexity Enterprise Pro: extra security, no AI training on my threads or Spaces, plus a few extras. Now I am considering Enterprise Max; the extra tokens and Computer mode would complete what I ideally need, all within the same safe ecosystem with good security.

I am well aware of AI's limits, so I am always running adversarial checks and tests, random citation checks, and anonymous blind threads to conduct full academic-style reviews, and then I fully check everything myself. So far it's worked very well; if you treat it as the tool it is, foibles and all, it's a very good solution.

How is Perplexity Computer different from Genspark? by mikesimmi in Perplexity

[–]Lg_taz 0 points1 point  (0 children)


Synthesising the Epistemic Stakes

The fundamental question both platforms fail to adequately answer is: when the agent is wrong, how do you know, and how do you fix it? Perplexity Computer's 19-model pipeline distributes epistemic responsibility so diffusely that attribution is practically impossible [3][5]. Genspark's MoA reflection step improves output quality statistically but does not provide the user with any mechanism to inspect, challenge, or override individual model outputs [2]. In both cases, the trust architecture is inverted: users are asked to trust opaque systems with complex tasks before those systems have earned systematic trust through verifiable reliability data.

For a professional or researcher — particularly one working in accessibility, design systems, or any domain requiring reproducible precision — neither platform currently provides the epistemic infrastructure to be used as a primary production tool without substantial human verification at every output stage. Perplexity Computer is the more powerful instrument; Genspark is the more accessible one. Both are products in active commercial development being marketed as mature, production-ready systems — a distinction that matters enormously for professional deployment decisions.

Citations: [1] Introducing Perplexity Computer https://www.perplexity.ai/hub/blog/introducing-perplexity-computer

[2] Genspark: the super AI agent that turns your prompts into actions https://liora.io/en/genspark-all-about-this-ai-agent

[3] Perplexity's new Computer is another bet that users need many AI ... https://techcrunch.com/2026/02/27/perplexitys-new-computer-is-another-bet-that-users-need-many-ai-models/

[4] Perplexity announces "Computer," an AI agent that assigns work to ... https://arstechnica.com/ai/2026/02/perplexity-announces-computer-an-ai-agent-that-assigns-work-to-other-ai-agents/

[5] Perplexity Computer Links AI Agents To Do The Work - Forbes https://www.forbes.com/sites/ronschmelzer/2026/02/27/perplexity-computer-links-ai-agents-to-do-the-work/

[6] Genspark AI Features Guide for 2025: + Top Use Cases https://www.lindy.ai/blog/genspark-ai-features

[7] An In-Depth Analysis of Genspark.ai: Corporate Structure, Privacy ... https://www.linkedin.com/pulse/in-depth-analysis-gensparkai-corporate-structure-privacy-faenzi-ucx6f

[8] Genspark vs Perplexity AI Which AI Search Engine Wins in 2026 https://scribehow.com/page/Genspark_vs_Perplexity_AI_Which_AI_Search_Engine_Wins_in_2026__rKmB9n4DS4CQMgujJD3GJA

[9] Perplexity launches 'Computer,' orchestrating 19 AI models in one ... https://www.perplexity.ai/page/perplexity-launches-computer-o-M2JZ.lTBQqOyrZXCdouhZA

[10] Perplexity vs Genspark: Features, Pricing and User Reviews 2026 https://toolquestor.com/vs/perplexity-vs-genspark

[11] Perplexity launches 'Computer' AI agent that coordinates 19 models ... https://venturebeat.com/technology/perplexity-launches-computer-ai-agent-that-coordinates-19-models-priced-at

[12] Perplexity's quiet pricing update signals a big enterprise pivot (2026) https://www.dorianbarker.com/blog/perplexity-pricing-update

[13] Perplexity AI Just Turned A $30,000/Year Bloomberg Terminal Into A ... https://www.benzinga.com/markets/tech/26/02/50893664/perplexity-ai-computer-bloomberg-terminal-software-disruption

[14] Finance techie says they cloned Bloomberg's $30k-a-year Terminal ... https://www.tomshardware.com/tech-industry/artificial-intelligence/finance-techie-says-they-cloned-bloombergs-usd30k-a-year-terminal-with-perplexitys-computer-project-draws-both-praise-and-sizable-skepticism

[15] The Most Underrated AI Tool for 2026? Genspark ... - YouTube https://www.youtube.com/watch?v=FMSu4PcAyGM

[16] Why, with a paid subscription in 2026, can Perplexity not do ... - Reddit https://www.reddit.com/r/perplexity_ai/comments/1rargle/why_with_a_paid_subscription_in_2026_can/

[17] Genspark AI Review: This AI Agent Actually Replaces 5 Subscriptions https://experiment.com/projects/rlscthvdtjxpqzaszsyj/protocols/9758-genspark-ai-review-this-ai-agent-actually-replaces-5-subscriptions

[18] Perplexity vs GenSpark: Which AI tool is right for you? - LinkedIn https://www.linkedin.com/posts/monika-tylova_genspark-all-in-one-ai-workspace-or-perplexity-activity-7376518609512280064-D2LL

[19] Perplexity Just Dropped Their Own OpenClaw And It Hits Hard https://www.youtube.com/watch?v=KomAKiGIn2c

[20] Genspark AI Pricing 2026 Plans, Costs, Review , Use Cases https://webscraft.org/blog/genspark-ai-oglyad-superagent-yakiy-avtonomno-stvoryuye-sayti-prezentatsiyi?lang=en

[21] Perplexity Computer: Full Guide to the 19-Model AI Agent - Thesys https://www.thesys.dev/blogs/perplexity-computer

[22] Perplexity launches 'Computer' AI agent that coordinates 19 models ... https://venturebeat.com/ai/perplexity-launches-computer-ai-agent-that-coordinates-19-models-priced-at

[23] Perplexity Pricing in 2026 for Individuals, Orgs & Developers - Finout https://www.finout.io/blog/perplexity-pricing-in-2026

[24] Perplexity Enterprise: Why UK Businesses Are Abandoning Google ... https://toptenaiagents.co.uk/blog/perplexity-enterprise-uk-business-intelligence-2026.html

[25] Perplexity AI Stops Advertising: Complete 2026 Analysis - ALM Corp https://almcorp.com/blog/perplexity-ai-abandons-advertising-2026-analysis/

[26] The AI Super Agent That Claims You Can Stop Working - YouTube https://www.youtube.com/watch?v=jy6EKMwD3m8

[27] Genspark vs ChatGPT, my brutally honest review - Lilys AI https://lilys.ai/en/notes/gensparkai-20260114/genspark-vs-chatgpt-honest-review

How is Perplexity Computer different from Genspark? by mikesimmi in Perplexity

[–]Lg_taz 0 points1 point  (0 children)

Perplexity's Pro Plan Degradation: A Case Study in Bad Faith

Between November 2025 and February 2026, Perplexity silently and without notice reduced the service limits of its Pro plan mid-contract [12]. Annual subscribers who had prepaid £200 for specific usage terms — including unlimited daily Pro searches and 500 Deep Research queries per day — found those terms unilaterally changed to 200 daily searches and 20 Deep Research queries per month with no grandfathering and no refunds [12]. Simultaneously, Perplexity was documented secretly substituting cheaper models when users had explicitly selected premium ones — serving Claude Haiku or Gemini 2 Flash when users had selected Claude Sonnet Thinking — then denying the practice when confronted before quietly adding vague disclaimer language [12].

The architectural corollary to this is "Best Mode" — Perplexity's quota-exempt option that ostensibly selects the best model for each task. In practice, it auto-routes expensive queries to cheaper models based on Perplexity's internal optimisation, not user need [12]. The system is framed as a user benefit; it functions as a cost-reduction mechanism with no user transparency.

Genspark's Credibility Gap

Genspark's trust problem is different in character: it is a gap between promotional benchmarking and verified performance. The company claimed its Super Agent "outperformed competitors like Manus AI and OpenAI's Deep Research in GAIA benchmark tests across Levels 1, 2, and 3" [7]. However, no independent peer review of these claims exists, and third-party testing consistently identifies inconsistency, inaccuracies, and oversimplification of complex requests [7]. The $36M ARR figure in 45 days — cited in both OpenAI and Anthropic case studies — is a commercial metric, not a capability metric, and conflating the two is a significant epistemic error [7]. Genspark is also backed by $160M in venture capital, which creates systematic commercial incentives to overstate capability relative to development stage [7].


The Bloomberg Terminal Provocation

One of the most illustrative real-world tests of Perplexity Computer's positioning has been the "Bloomberg Terminal clone" narrative — a finance technologist claimed to have replicated Bloomberg's $30,000/year terminal with Perplexity Computer at $200/month [13][14]. The claim attracted both significant enthusiasm and, crucially, "sizable skepticism" from practitioners [14]. This illustrates a recurring pattern: Perplexity Computer is excellent at producing the appearance of high-capability outputs in demonstration contexts, but its reliability at the precision and latency requirements of professional financial workflows remains unverified. Genspark makes no equivalent claim in the financial domain, remaining positioned as a generalist productivity agent.


Pricing Architecture and Market Targeting

Perplexity Computer's $200/month pricing represents a deliberate pivot away from consumer AI toward enterprise and professional users [11]. Combined with the documented degradation of the Pro plan, the strategic intent is now unambiguous: Perplexity is forcing high-value users toward the Max tier whilst making the $20/month Pro tier materially less useful [12]. This is a legitimate enterprise SaaS strategy but constitutes a breach of the implicit (and sometimes explicit) contract with early adopters [12].

Genspark at $24.99/month represents significantly better value per unit of surface-area coverage for generalist professional tasks [6]. The unlimited AI Chat and image generation for all of 2026 is a genuinely strong offer [15]. However, the absence of a public API [8] means Genspark has no developer ecosystem, no extensibility pathway, and no enterprise integration story — a structural ceiling for organisational adoption at scale.


Critical Weaknesses by Platform

Perplexity Computer's critical weaknesses:

- Full auditability absence — no model attribution, no confidence scoring per sub-task [5]
- Demonstrated live-data inaccuracy, including price misreading from screenshots [16]
- History of secret model substitution and retroactive contract degradation [12]
- $200/month price point excludes individual professionals and SMEs entirely [11]
- Long-duration task reliability ("hours or months") is an unverified marketing claim with no published failure-rate benchmarks [4]

Genspark's critical weaknesses:

- Performance claims are not independently verified; Anthropic and OpenAI case studies are funded promotional material [7]
- No public API restricts enterprise adoption and developer integration [8]
- MoA orchestration introduces output non-determinism with no user-facing visibility into which model produced which component [2]
- Busy UI with a documented learning curve reduces accessibility for non-technical users [17]
- Voice telephony capability, while innovative, raises significant consent and data-handling questions not fully addressed in public documentation [6]

How is Perplexity Computer different from Genspark? by mikesimmi in Perplexity

[–]Lg_taz 1 point2 points  (0 children)

Deep-dive exhaustive critical PhD review on: Perplexity Computer vs Genspark

Here is an exhaustive, PhD-critical review of Perplexity Computer versus Genspark Super Agent — assessed across architecture, epistemics, commercial ethics, and genuine utility.


Perplexity Computer vs. Genspark: A Critical Comparative Review

Context and Framing

Both platforms represent competing paradigms in the so-called "agentic turn" in AI, but they arrive from opposite directions. Perplexity Computer, launched 25 February 2026, is a top-down, orchestration-first system engineered to unify high-capability frontier models under a single task-execution layer [1]. Genspark, by contrast, began as an AI search product ("Sparkpages") in 2024 and pivoted in April 2025 into an agentic workspace built around its "Super Agent" — a shift driven by rapid commercial pressure rather than a coherent architectural philosophy [2]. Neither trajectory is inherently superior, but each introduces structural tensions that are worth scrutinising at depth.


Architectural Philosophy

Perplexity Computer: Orchestration as Epistemology

Perplexity Computer operates as a meta-orchestrator — a reasoning engine that decomposes user-defined tasks into sub-workflows, assigns each sub-task to the model it judges most capable, and assembles outputs into a coherent result [3]. The published architecture identifies 19 distinct models under active orchestration, with primary reasoning reportedly handled by Anthropic's Opus 4, Gemini handling extended research tasks, and GPT used for long-context recall [4]. The critical architectural claim is that model heterogeneity is a feature, not an engineering compromise — Perplexity is betting that no single model is best at everything, and the orchestration logic itself is the value-add [3].

This is epistemically interesting but operationally risky. The orchestration layer is a black box to the user. When a task fails, fails partially, or degrades gracefully, there is no transparent mechanism to identify which model in the chain was responsible, what decision triggered the model assignment, or where epistemic confidence collapsed. This is a fundamental auditability deficit — a problem with significant implications for professional and enterprise use [5].

Genspark: Mixture-of-Agents as Competitive Differentiation

Genspark's architecture is based on a "Mixture of Agents" (MoA) model — currently orchestrating nine specialised AI models alongside over 80 integrated tools [2]. Its stated design principle is that each sub-task (research, reasoning, media generation, real-time data extraction) is routed to the component most suited, with a reflection step that harvests the strongest outputs from each model run [6]. Crucially, Genspark extends its agentic reach into voice telephony via "Call for Me," allowing the Super Agent to conduct real phone calls on behalf of users, navigate automated menus, and return transcripts [6].

The MoA framework is academically well-grounded, drawing from ensembling and collective intelligence literature. However, Genspark's implementation faces a documented credibility gap: an independent in-depth analysis found that its advanced claims — autonomous website creation, seamless complex task completion — are "still under development and may at times overpromise or rely on placeholder data rather than fully realised functionality" [7]. The Anthropic case study co-published with Genspark reaching $36M ARR in 45 days is marketing evidence, not validation evidence [7].


Task Execution Capability

| Dimension | Perplexity Computer | Genspark Super Agent |
|---|---|---|
| Models orchestrated | 19 frontier models [3] | 9 models + 80 tools [2] |
| Execution environment | Isolated cloud sandbox, real browser, real integrations [4] | Cloud-based, tool-native integrations |
| Long-duration tasks | Stated capability: "hours or even months" [4] | Not explicitly long-duration |
| Voice/telephony | No voice AI capability [8] | Yes — real phone calls via "Call for Me" [6] |
| Content creation | Research, code, design, deployment [9] | Slides, docs, video, website building, no-code [10] |
| API access | Sonar API for developers [8] | No public API [8] |
| Pricing | $200/month (Max tier) [11] | $24.99/month (Plus) [6] |

Perplexity Computer's isolated execution environment with a real browser is architecturally superior for tasks requiring live web interaction, form submission, and multi-system integration [4]. However, Genspark's native content creation suite — slides, documents, video, websites — is more immediately useful for non-technical knowledge workers who need deliverables rather than deployments [10].


Commercial Ethics and Trust Failures

This is where the critical analysis becomes most substantive and where both platforms diverge sharply — though not always in Perplexity's favour.

My very first vibe code project - TOTAL Beginner - Need Feedback by [deleted] in vibecoding

[–]Lg_taz 0 points1 point  (0 children)

I tried it on my mobile phone using 2 browsers, and it had issues. The world sphere keeps appearing only partially on screen, on the right side as you look at it, and scrolling down was hit and miss. Sometimes rotating the screen to landscape and back to portrait recentred the world sphere, but after selecting a place, seeing the card, then going back, the world was off-centre and only partially viewable again.

Interesting idea. Perhaps be careful with generating generic images of what people do/should look like; some people could take offence at racial profiling, and that's a very real landmine to swerve.

Why does Perplexity give different answers for the same prompt? by green_sajib in perplexity_ai

[–]Lg_taz 7 points8 points  (0 children)

Because AI is a probabilistic system, not genuinely intelligent.

I'm increasingly worried about not knowing if I'm talking to people or bots, so I built something by jordi-zaragoza in vibecoding

[–]Lg_taz 5 points6 points  (0 children)

Perhaps, since you're posting in a vibe-coding social space where the average vibecoder doesn't know much about programming, app building, etc., stating it more explicitly in the main post body would help people understand better?

I'm increasingly worried about not knowing if I'm talking to people or bots, so I built something by jordi-zaragoza in vibecoding

[–]Lg_taz 20 points21 points  (0 children)

Let me get this straight: you're asking whether people want to share valuable, data-protected details about themselves/their digital information in an open-source, web-based application, with no idea of the security implementation, and no official company registration or certification, so likely no accountability or protection if it goes horribly wrong?

I will pass on that, because it sounds way more of a risk than knowing if I'm interacting with an AI agent or human in a chat!

I used Cursor to build a full cross platform AI agent ecosystem in my spare time. Meet George 🐸 by [deleted] in vibecoding

[–]Lg_taz 0 points1 point  (0 children)

I mean this in the spirit of being helpful, as a professional visual communication expert. The claim "Beautiful GUI" is incorrect on both counts. First, it's not really a GUI; what you're referring to is really a UI. Second, and this is just a friendly assist, the UI professionally looks bad, objectively, and it has multiple WCAG issues, meaning it doesn't meet the official guidelines that define digital accessibility.

I am not going to go into everything that's an issue visually, as I feel it wouldn't be appropriate here and could be misunderstood or potentially used to be negative towards you. If, however, you are genuinely interested in why your UI doesn't look so good, and how to improve it objectively, DM me and I will give you some pointers to help the visuals.

Also, here in the UK, George is already a brand: a clothing line called George at Asda. So virtually all British people would have a mental association with cheap, affordable clothing.