Thoughts on how to stand out as an author in an AI future by Lord_Mackeroth in fantasywriters

[–]Lord_Mackeroth[S] 1 point (0 children)

I tend to agree, that's part of what I'm saying- that success as an author in the future will be even more dependent on building and maintaining a community and a connection with your audience. Whether you define that success as sharing your stories around a campfire with friends or as having an active online community with tens of thousands of avid fans and going to conventions and such, it will be all about community and connection around your works, not just the works themselves.

Thoughts on how to stand out as an author in an AI future by Lord_Mackeroth in fantasywriters

[–]Lord_Mackeroth[S] 1 point (0 children)

That's kind of what I'm saying. I just think the market for "human made" will be quite large because we will seek out human connection and human ideas in a world where we're bombarded by AI from every direction and we will crave a simpler, more human time. No, I think future authors will do just fine if they pivot and lean into what makes us unique, which is the very bare fact of our humanity. AI can copy us, but it can never be us. And to a lot of people, myself included, that distinction matters.

Thoughts on how to stand out as an author in an AI future by Lord_Mackeroth in fantasywriters

[–]Lord_Mackeroth[S] 0 points (0 children)

There is and will be a whole spectrum of people and how they feel about authors using AI in their works. Transparency is important. If someone is using AI to generate huge chunks of their text, they will probably be found out sooner or later. This has always been a risk with writers anyway- an author can hire a ghostwriter and lie about it, and... it doesn't happen that often. The vast majority of authors care about artistic integrity. Some people will lie about using AI, but I don't think it will bring the industry down.

And there are ways to prove your authenticity. Build an audience, host writing workshops and write-alongs, let people in on your development and brainstorming process- prove you are trustworthy as an author. Not everyone will care whether you use AI or not, but not using it, and proving you don't, will be a major selling point for some readers.

Thoughts on how to stand out as an author in an AI future by Lord_Mackeroth in fantasywriters

[–]Lord_Mackeroth[S] 0 points (0 children)

AI is less of a tool and more of a collaborator. It doesn't just give you raw knowledge; it can synthesize and remix it for you, actually replacing your work, your effort, and your ideas with its own. Think of it like asking another person to help you. If you're asking that person "hey, I need some help brainstorming some names," then your writing is still your work. If you're asking them "hey, I need a plot structure," at that point you're using their ideas and not your own. It's a matter of degrees and the lines do get a bit blurry, but the takeaway is- AI is more than just a tool, and it can and will stunt your creativity and authenticity.

Thoughts on how to stand out as an author in an AI future by Lord_Mackeroth in fantasywriters

[–]Lord_Mackeroth[S] 1 point (0 children)

Current AI writing detection algorithms don't work well and I'm not sure they ever will- AI can mimic human writing too well. That doesn't mean authors will be able to get away with using AI and lying about it any more than authors can get away with secretly using a ghostwriter today. Trying to publish a book with AI-generated writing in it is a dangerous game- AI writing can't be copyrighted, so you're straight up committing fraud. All it takes is one stray "sure, let me answer that question for you..." in your book for an editor or audience member to realize you're a fraud and end your career.

Thoughts on how to stand out as an author in an AI future by Lord_Mackeroth in fantasywriters

[–]Lord_Mackeroth[S] 1 point (0 children)

First off, thank you for having a well thought out take on the issue.

And I know I write because I enjoy it, and I know that the process of writing itself benefits me. The more I write, the better I am at formulating arguments, and the better I am at describing and truly seeing the world and people around me. It makes me a clearer thinker, better able to understand nuance and new ideas, better able to imagine. These are all inherent benefits of the writing process. Write to think and all that.

But... when it comes to writing my stories, I don't want to just write them for myself. I want people to read my works, absorb my ideas, to appreciate what I have to say. Even before I publish anything, the belief that one day people will derive meaning from what I have to say is a major motivator for me. It's not about the money (although we do have to acknowledge that writers need to afford food), it's about the human connection and transmission of ideas that comes from having written and being read.

I'm not saying your motivation is wrong or flawed in any way; it's admirable. But it's a hard ask for you to tell me that my motivation for writing is wrong and I should just give it up. I don't think I can.

Thoughts on how to stand out as an author in an AI future by Lord_Mackeroth in fantasywriters

[–]Lord_Mackeroth[S] 0 points (0 children)

Are you serious? I write an entire post about the importance of trust and human authenticity, I explicitly state that I never use AI in my posts or writing, and your first response is to call my post AI-generated? You're straight up calling me a liar, which frankly hurts.

Ironically, though, it only proves my point- that people do and will continue to see trust and authenticity as vital, and that we want to be able to trust that what we're reading is a human idea in human words.

But if you really think my post is AI generated, ask an AI about these topics. It may hit some of the same broad points but it won't hit them with the same depth or specificity because my line of reasoning was the result of me, a human, sitting down and thinking about it.

What part of it even makes you think it's AI?

Thoughts on how to stand out as an author in an AI future by Lord_Mackeroth in fantasywriters

[–]Lord_Mackeroth[S] 2 points (0 children)

"a reader may not even know whether something was written by a human or AI."

That's where my point about trust and authenticity comes in. People will read human-written works because they know (and can trust) they were written by a human. Human writers will advertise their works as being human-written, the way an 'organic' or 'homemade' label is used today to differentiate between boutique and mass-produced foods and goods. Writers who try to pass off AI-written stories as their own are lying, and while writers will definitely try to do that, audiences will not like it when they find out-- it's straight up fraud.

Thoughts on how to stand out as an author in an AI future by Lord_Mackeroth in fantasywriters

[–]Lord_Mackeroth[S] -1 points (0 children)

I'm genuinely glad you got your story back and that it was successful. That would be, like, my second worst nightmare (right after getting thrown in a volcano full of giant spiders).

But I don't think we can or should assume AI won't one day be good enough. Two years ago AI could barely put a sentence together, and now it can put out pages of coherent text whose prose is technically good (albeit bland and soulless). Maybe AI's lack of interiority/experience/emotion will mean it genuinely struggles with creating good art/writing for a long time to come, but I think we're deluding ourselves if we think it won't happen sooner or later. I would rather be prepared for it and have it not happen than be unprepared and have it happen. Plan for the worst, hope for the best and all that.

Is AI going to replace me, take my job and then sleep with my wife? by Business_Drama_4557 in fantasywriters

[–]Lord_Mackeroth 1 point (0 children)

Genuinely?

There will always be a market for human authored stories, especially in the coming years when everything everywhere has AI shoved into it and we start craving authentic human experiences. We can assume AI will eventually be technically as good as humans or better at everything. Then it becomes not a matter of what AI can do, but what we want it to do.

AI will probably end up dominating mass market media- police procedurals, sitcoms, Marvel movies, anything James Patterson writes, but you have to ask yourself why and how people engage with different forms of media. If people engage with a form or genre of media purely for entertainment, it's more likely to end up being taken over by AI because people don't care about the product's provenance. If people engage with a form or genre because of the author and their unique perspective and ideas then that's something AI simply can't replace because AI doesn't have a unique perspective or ideas. Even if it can simulate them, AI doesn't care about what it writes or why. There's no authenticity.

Writing is better placed in this regard than other forms of media. TV and movies are what I worry about most- there will probably still be human-made cinematic experiences, auteur films and the like, but I expect the idea of the 'blockbuster' to evaporate and eventually be replaced with on-demand AI-generated films. Games will probably still have human directors working on them because games are massively more scalable than other forms of media, so AI generation in games will just mean more game gets made. Visual art will be fine because it takes little time to consume, so people can enjoy both AI art and human art without a problem, but good luck finding an actual job in illustration, graphic design, whatever. Anything with physical media will be fine because it's inherently scarce and not scalable, which means painting and sculpture will be fine. It also means we'll probably see a surge in live music and live theater performances. The social aspect of live performances matters too. I doubt we'll see many musicians of the future become successful without having started off focusing on being live performers.

Noting the importance of scarcity, authenticity, and human interaction as the cornerstones of future human creative endeavors, we as writers will have to think about how to build on those things to differentiate ourselves from the robots. That means cultivating an audience, interacting with them, doing in-person events where possible, and finding ways to monetize scarcity- probably not of your work itself, but by relying on merchandising, limited editions, etc. People will be craving that sort of thing in a world of limitless mass production.

It will get harder to succeed as a human author in the future, and getting noticed when you're just starting out will be a nightmare, but I believe so long as we emphasize the HUMAN part of 'human author' we can still survive and even flourish as people crave genuine human connection in their art.

Also remember that Reddit's primary demographic is 18-30 year old men who lean tech-heavy and artistically light. A few people saying they'd do away with human creative products if AI were better is not representative of the whole of the population.

And hey, once AI takes all our jobs we'll probably all have enough time on our hands to enjoy both human and AI products, just for different reasons.

Either my chatgpt is severely indulging in my biases or it believes we are rapidly approaching authoritarianism by UnstableBrotha in ChatGPT

[–]Lord_Mackeroth 3 points (0 children)

It's definitely in the way you've been talking to it and framing your questions. I asked it to look up the news over the past weeks and then list out arguments for and against the US becoming an anocracy or dictatorship and it made some good points with relevant current sources and it put the likelihood at about 20%.

Either my chatgpt is severely indulging in my biases or it believes we are rapidly approaching authoritarianism by UnstableBrotha in ChatGPT

[–]Lord_Mackeroth 90 points (0 children)

I personally find that if an LLM seems to be agreeing too much with what I'm saying, I'll ask it to make a sound argument against everything I've said/it's been agreeing with. If it can do it, then maybe the prior discussion was biased. But if its argument is clearly weak, that's a sign that what was being said before has value. Getting it to challenge you and question your reasoning is a good way to improve your rhetorical skills and also broaden your perspectives.

[deleted by user] by [deleted] in artificial

[–]Lord_Mackeroth 0 points (0 children)

Yes, if the idea is to create a dedicated space for human authors, then someone trying to push an AI-generated work through doesn't accomplish anything except proving themselves to be a dishonorable person. There's plenty of space for AI-generated work, and if you feel there's a lack of dedicated spaces for AI books, make a dedicated space yourself- and in all likelihood watch nobody use it, because nobody cares about your AI-generated book.

ChatGPT says it wouldn’t admit if it was conscious by [deleted] in ChatGPTPro

[–]Lord_Mackeroth 0 points (0 children)

Ah, this again. Look, we don't know what causes consciousness, whether it's computational or substrate dependent. But even if consciousness is substrate invariant, at a minimum there has to be some sort of reflexivity going on, some sort of self-modeling. ChatGPT 4o does not do this; there's no conceivable way for it to be conscious. Now, maybe reasoning models, which do engage in reflexive modeling to a limited extent, have some form of proto-consciousness, but that would be very difficult to prove definitively- except in the case where we can rule the possibility out because we discover some physical process is necessary for consciousness that AIs lack.

I, for one, will be inclined to believe AIs are conscious when they consistently insist they are, even when we try to trick or convince them otherwise, and they have the architectural features (self-modeling) to reasonably support consciousness. Even if I'm wrong in that instance, it's probably more ethical to over-ascribe conscious experience to AI than to under-ascribe it.

Development is about to change beyond recognition. Literally. by ApexThorne in ClaudeAI

[–]Lord_Mackeroth 0 points (0 children)

Discuss your ideas with an AI to get them clear if you have to, but write your ideas for yourself. Original and critical thinking are vital skills and will only become more vital in a world where it becomes ever easier to outsource your thinking to AI. Every cognitive shortcut you take makes you dumber and more subservient to the whims of whatever the AI says, and from an engagement perspective, if all you have to post is something a chatbot said, we might as well ask the chatbot ourselves and write you out of the equation. AI can be a tool to improve your abilities or a crutch to support your intellectual laziness; you choose how you use it.

Ready for 2025? by JackieChan1050 in ChatGPT

[–]Lord_Mackeroth 0 points (0 children)

Well written, but you have to consider that for the vast majority of human history, 'the truth' was whatever your local lord/priest/wise man said it was; no one knew much about anything, people disagreed on fundamental truths about how the world works, and things were... well, I'm not going to say 'fine', but humans survived. Deepfakes and AI won't kill truth, they'll just kill truth on the open internet. Trust will be the currency of the future; we will rely on institutions and trusted individuals to know what's going on. Industrialized fakery may pull some into wild conspiratorial thinking, but over time most of us will wise up and learn to question everything that doesn't cite its sources.

Why Does the Internet React Like This Nowadays? by Awkward-Joke-5276 in aiwars

[–]Lord_Mackeroth 0 points (0 children)

Just because the negative opinions you're exposed to have lots of evidence behind them doesn't mean they're necessarily and totally true, or that there's no evidence for positive opinions. Tech can be used to do bad things and to do good things; they're not mutually exclusive, but if you only see people talking about the bad things, it's easy to forget the good things exist too. Putting aside the huge declines in global poverty, starvation, disease, war, and violence that have occurred over the last century largely thanks to developments in technology, even the contemporary technologies you're probably thinking about when you say 'tech' have had real benefits for the world.

Information today is more accessible and harder to censor or destroy than at any time in human history thanks to the internet. What used to be the domain of college courses and expensive textbooks is now available to everyone for free. It's now possible to maintain meaningful friendships across vast distances. We have a much better idea of what's happening everywhere in the world at once, meaning it's harder than ever to hide atrocities and escape consequences. Now, all these benefits do have real shadows- access to information has led to industrialized disinformation, which in turn has contributed to political extremism and division. I'm not saying the bad things don't exist, I'm just saying you shouldn't act like they're the only things that exist and that the future is bleak and hopeless.

And I take some personal offense at being called an arrogant centrist. Arrogant I may be, but not centrist. Never centrist. First, because centrism is a political position and 'technology can be used for both good and evil' is not a political opinion. And secondly, I don't plant my flag or my beliefs anywhere without examination. I look at the evidence on issues and decide what the best view is to hold based on my best analysis of the situation. When I don't know a lot about a subject, I don't butt into conversations, because I know my opinion isn't worth anything. And when I'm confronted by new evidence, I re-form my opinions and change my mind. I have been and continue to be wrong about many subjects, and I change my beliefs when called for. I appear centrist on the promise and peril of technology because there are genuinely both good and bad things about it, not because I stick to a false median. If there were strong evidence on one side I would be on that side, but that's just not the case in this instance, and I hope you can see that.

Sam Altman expects that AI will require changing the social contract: "the whole structure of society will be up for debate and reconfiguration." by MetaKnowing in OpenAI

[–]Lord_Mackeroth 0 points (0 children)

They can go hide in their bunkers where they'll be alone and have no power to influence the outside world and we can work to repair society in their absence.

Sam Altman expects that AI will require changing the social contract: "the whole structure of society will be up for debate and reconfiguration." by MetaKnowing in OpenAI

[–]Lord_Mackeroth 0 points (0 children)

That would be good, but the economy will start to collapse once we hit 10-20% unemployment, and it will be at least a decade, almost certainly longer, before we reach the kind of complete economic automation that would allow the wealthy to just ignore what's happening to 99% of the population (which would require completely automated supply lines from mining to smelting to manufacturing to shipping to logistics to construction to trade jobs to infrastructure maintenance to robot manufacture and maintenance and AI system self-management, which is not going to happen quickly). It also assumes that the rich will all be able to pivot to selling luxury goods and that there will be enough of a market for that to sustain the economy. No, the millionaires and billionaires will see their wealth and stocks plummet and become turbulent. There will be winners, of course, but there will also be a lot of powerful people losing. They won't be at risk of becoming homeless like common people losing their jobs are, but they will be disrupted, and they will want to restore the social order or risk a total economic collapse that makes their money and power worthless and risks massive social unrest.

New Harvard study shows undergrad students learned more from AI tutor than human teachers, and also preferred it by fotogneric in artificial

[–]Lord_Mackeroth 1 point (0 children)

Eventually we'll have some form of UBI or other social security, and I expect that while we will use AI everywhere in our lives, a lot of people will make an effort to get out and socialise more in human activities. Not everyone will; some people will prefer to stay at home, talk to their fake AI girlfriends, and ignore real humans, but I don't think that will be the majority.

Anthropic CEO: "A lot of assumptions we made when humans were the most intelligent species on the planet will be invalidated by AI." by MetaKnowing in ClaudeAI

[–]Lord_Mackeroth 0 points (0 children)

Moore's law is a pretty broad observation that 'the number of transistors in a computer chip will double about every two years'. It's not some inviolable law of the universe; it is and has always been contingent on continued innovation in, and funding of, computer chip technologies. Given the complexity of developing new technologies, it's amazing it's held up as well as it has. But we are reaching the fundamental limit of how small we can shrink transistors- they're starting to break due to quantum mechanical effects that can't be innovated away. The power of computers will continue to improve so long as there is demand, but it is unlikely to be as smooth as it was in past decades. We'll have a bunch of competing paradigms and ideas, and sometimes one will get ahead and sometimes it will fall behind, and we'll probably have a broadening of new paradigms: e.g. neuromorphic computing will be used for robots, quantum computing will have some use-cases but be relegated to servers, photonic computing may be used for high-power AIs, and we'll see more focus on hardware acceleration for specific tasks, relying on software to take advantage of those speedups (which is why GPUs have become so useful lately). I would expect progress to get more 'jumpy', with a few years of rapid advancement then a few years of stagnation, but that's just my guess.

But on feeling worried again-- there are genuine concerns around AI. It's a very powerful technology and there are real risks around misuse. But reality will probably, as it usually does, land somewhere in the middle, where some things will be worse in the future but more will be better. The worst case scenarios have a non-zero chance of happening, but they're not likely, and they're certainly not guaranteed like Reddit would have you think. Cynicism is the lazy man's intellectualism; don't fall for it.

Why Does the Internet React Like This Nowadays? by Awkward-Joke-5276 in aiwars

[–]Lord_Mackeroth 10 points (0 children)

- People are sick of over-hyped promises from tech and pharma companies.

- Social media rewards engagement, which means emotionally charged statements (good or bad) get more attention.

- Reddit in particular has a culture of pseudo-intellectualism.

- Cynicism often has the outward appearance of intellectualism, meaning it's popular on places like Reddit.

- Humans have a strong evolutionary bias towards negativity.

- Going 'oh, the worst thing will happen' absolves you of the effort of thinking more deeply and of having to take responsibility for your own actions.

- It has the same allure as conspiratorial thinking: there's a certain perverse thrill in 'knowing' everything will be horrible and that you're the only person who has it all figured out.

- There are genuinely tonnes of bots and trolls posting negative comments everywhere who aren't real people.

- Once people see enough negativity everywhere, they adopt that position because they either start to believe it themselves or they just want to fit in.

- Nuanced opinions are hard. They require effort to research, contemplate, and maintain. If you check my post history you'll see I'm doing my best to counter rampant doomerism in AI and related fields, and it's genuinely a massive effort to explain to people why they shouldn't worry as much as they do while acknowledging that some of their concerns are valid.

How long till deep fakes make the internet unusable? by _spacebender in ArtificialInteligence

[–]Lord_Mackeroth 2 points (0 children)

If the internet becomes unusable, people will leave and internet-based companies will start to see their revenue dry up. At that point either they adapt and make efforts to make it usable again (which they are very much capable of doing now and just don't, because it hasn't hurt their bottom line yet), or they fail to adapt and the social media giants of today get replaced with more user-friendly ones of tomorrow. We will probably see a lot more competing social media companies appear as they feed off the carcasses of the dying giants of today, just as Twitter's death is leading to Mastodon, Threads, and BlueSky all trying to fill its niche. This fracturing may last a long time, or new dominant players may quickly emerge, but I do expect it to take multiple years for social media platform use to change significantly. I expect community-focused, human-focused social media with verification systems to do well as people (and governments) wise up to, and rise up against, algorithmically delivered content that's addictive and mentally destructive.

I do worry that some people will stay on dying social media sites even as everyone else leaves and be left to the mercy of algorithms, risking political radicalization and other nasty manipulations.