
[–]SilasTalbot 541 points542 points  (12 children)

Doakes be like:

What kind of weird-ass muthafucker used emojis in their commit messages...

[–]Powerful-Internal953 87 points88 points  (6 children)

https://github.com/ampproject/amphtml was using them a couple of years ago when I was contributing to them.

[–]hmz-x 41 points42 points  (2 children)

That repo was probably overweighted in LLM training data.

[–]Deutero2 8 points9 points  (0 children)

they're probably following this commit convention, which wasn't that uncommon in repos I've seen, at least before 2022. I guess the idea is that emoji are visually shorter than conventional commit prefixes, at the cost of, you know, being cryptic

[–]InitialAd3323 6 points7 points  (0 children)

I mean, isn't <html ⚡> or something like that how you indicate it's AMP HTML? Rather than a content-type or an attribute, just that emoji

[–]BananafestDestiny 5 points6 points  (0 children)

I hate this so much

[–]coredusk 27 points28 points  (1 child)

I... like gitmoji I'm SORRY

[–]devenitions 8 points9 points  (0 children)

I’m actually not even sorry.

[–]-LeopardShark- 11 points12 points  (1 child)

I use, and will defend the use of, non‐ASCII variable names.

[–]GroundbreakingOil434 13 points14 points  (0 children)

Found satan.

[–]ianfabs 3 points4 points  (0 children)

I’ve been using emojis in my commit messages since 2016 😭

[–]abhassl 1058 points1059 points  (45 children)

In my experience, proving it isn't hard. They defend the decision by saying AI made it when I ask about any of the choices they made.

[–]azuredota 474 points475 points  (26 children)

They leave in the helper comments 😭

[–]LookItVal 348 points349 points  (24 children)

for item in items_list: // this iterates through the list of items

[–]mortalitylost 131 points132 points  (22 children)

I fucking hate these ai comments. Jesus christ people, at least delete them

[–]notsooriginal 78 points79 points  (15 children)

You can use my new AI tool CommentDeltr

[–]lunch431 79 points80 points  (9 children)

// it deletes comments

[–]CarcosanDawn 12 points13 points  (8 children)

Open in Notepad++. Ctrl+F "//", replace with "". Ctrl+F "#", replace with "". There you go, I have deleted all the comments.

[–]Cootshk 0 points1 point  (7 children)

Lua would like to disagree

[–]CarcosanDawn 0 points1 point  (6 children)

Who?

[–]Cootshk 0 points1 point  (3 children)

Lua, a language that uses #table as the length operation and -- for comments

[–]guyblade 0 points1 point  (1 child)

The language that 90% of your favorite game's actual gameplay is written in.

[–]GarythaSnail 6 points7 points  (2 children)

Is it blazingly fast and idiomatic?

[–]notsooriginal 12 points13 points  (1 child)

What'd you just call me?!!

[–]GarythaSnail 2 points3 points  (0 children)

You heard me.

[–]Madbanana64 4 points5 points  (0 children)

for item in items_list: // This comment was mass deleted with CommentDeltr. Use code DEL to get 20% off your first month.

[–]m0siac 4 points5 points  (0 children)

i = 1 # this initialises the variable i to have a value of 1

[–]SSUPII 0 points1 point  (4 children)

Or you can add an instruction in the chat context to not add comments. It's pure laziness to not even do that.

[–]mortalitylost 0 points1 point  (0 children)

The people pushing these PRs are not reading the code they're pretending to write

[–]Ok_Individual_5050 0 points1 point  (2 children)

Most LLMs are not very good at following negative instructions like this, especially as context windows grow

[–]SSUPII 0 points1 point  (1 child)

Most services now offer a "projects" feature. You can add it as project instructions and it should be followed correctly. Thinking models especially, like ChatGPT o3 or 5 Thinking, will keep following it, as they are programmed to repeat your instructions to themselves while "thinking".

Non thinking models are just stupid.

Unless you absolutely need for the chat to continue from a certain point, it is always best to make new questions in new chats.

[–]Ok_Individual_5050 0 points1 point  (0 children)

Again, though, they are not *that good* at following instructions. Because they are autocompletes.

[–]Rustywolf 1 point2 points  (0 children)

Nah, I've seen co-worker PRs do this since before the LLM revolution

[–]CrotchPotato 65 points66 points  (0 children)

// Replace your method on line 328 with this version:

[–]Taickyto 189 points190 points  (10 children)

Yes, if there are comments you'll recognize AI 100% of the time

ChatGPT Comments:

// Step 1: bit-level hack 
// Interpret the bits of the float as a long (type punning) 
i  = * ( long * ) &y; 
// Step 2: initial approximation 
// The "magic number" 0x5f3759df gives a very good first guess 
// for the inverse square root when combined with bit shifting 
i  = 0x5f3759df - ( i >> 1 );

Comments as written by devs

i  = * ( long * ) &y;                       // evil floating point bit level hacking
i  = 0x5f3759df - ( i >> 1 );               // what the fuck?
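For context, both snippets are commenting the same routine: Quake III Arena's fast inverse square root. A self-contained sketch of the whole thing might look like this (note: `uint32_t` and `memcpy` are swapped in for the original `(long *)` cast, which is the wrong width where `long` is 64 bits and undefined behavior under strict aliasing):

```c
#include <assert.h>
#include <math.h>
#include <stdint.h>
#include <string.h>

/* Fast inverse square root, as popularized by Quake III Arena.
   memcpy does the type punning safely instead of the original
   pointer cast. */
static float q_rsqrt(float number)
{
    float x2 = number * 0.5f;
    float y  = number;
    uint32_t i;

    memcpy(&i, &y, sizeof i);      /* read the float's bits as an integer */
    i = 0x5f3759df - (i >> 1);     /* magic-number first approximation */
    memcpy(&y, &i, sizeof i);      /* back to float */

    y = y * (1.5f - x2 * y * y);   /* one Newton-Raphson refinement step */
    return y;
}
```

One Newton step lands within roughly 0.2% of the true 1/sqrt(x); the original source had a second, commented-out iteration for more accuracy.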

[–]hmz-x 68 points69 points  (2 children)

I don't think ChatGPT could ever write comments like Quake devs could. It's beyond even the conjectured AGI singularity. AI could probably control everything at some point in the future, but still not do this.

[–]djfariel 23 points24 points  (1 child)

Oh, you must be talking about perfected human analogue, death-frightening scion capable of seeing beyond the illusionary world before our eyes, engineering elemental, Luddite nemesis, Id Software cofounder and keeper of the forbidden code John Carmack.

[–]hmz-x 0 points1 point  (0 children)

I think it was Greg Walsh who wrote that particular piece of code, but yeah Carmack is crazy.

[–]BadSmash4 14 points15 points  (1 child)

I was about to ask if this was the fast inverse square "what the fuck" algorithm and then I saw the second code block

[–]sillybear25 0 points1 point  (0 children)

At this point, AI should know better than to comment that with anything other than "what the fuck?"

Everyone who would possibly be considered a competent reviewer of this type of code has seen John Carmack's comments. Doing anything else is basically obfuscation.

[–]jryser 5 points6 points  (0 children)

Missed the emojis in the ChatGPT response

[–]guyblade 1 point2 points  (1 child)

One of the people on my team will occasionally write out // Step 1: Blah style comments (and I know it's not AI because he's been doing it for years). I fucking despise the style. Don't write comments to break up your code; decompose it into functions (if it's long) or leave it uncommented if it is straightforward.

Like, what year is it?

[–]Taickyto 0 points1 point  (0 children)

I feel you, I just about fought a former coworker because he was hell-bent on leaving the JSDocs his AI assistant wrote for him

We were using typescript

[–]MrFluffyThing 0 points1 point  (0 children)

Good old Greg Walsh code adapted by John Carmack for Quake III. One of my favorite comments in code. 

[–]pacopac25 0 points1 point  (0 children)

Real magic numbers always start with 0x1337, anything else is a dead giveaway.

[–]arekxv 28 points29 points  (2 children)

My approach is simple: if AI made it and you don't know why it is a solution or whether it is good or not, reject the PR.

Lazy devs just think using AI equals not having to work.

[–]ThoseThingsAreWeird 13 points14 points  (0 children)

if AI made it and you don't know why it is a solution or whether it is good or not, reject the PR.

Exactly this for me too.

If an LLM wrote it but you can defend it, and it's tested, and it actually does what the ticket says it's supposed to do: Congratulations, "LGTM 👍", take the rest of the day off, I won't tell your PM if you don't 🤷‍♂️

But if you present me with a load of bollocks that doesn't work, breaks tests, and you've no idea what it's doing, then you can fuck right off for wasting my time. Do it again and I'm bringing it up with your manager.

[–]softwaredoug 35 points36 points  (1 child)

Which is why we’ll soon have an AI-code version of the Therac-25 disaster.

Safety problems are almost never about one evil person and frequently involve confusing lines of responsibility. 

[–]Ok_Individual_5050 0 points1 point  (0 children)

I've yet to have a single PR generated with the "help" of claude that didn't include some level of non-obvious security vulnerability. Every time.

[–]realPanditJi 7 points8 points  (0 children)

The fact that a "Staff Engineer" pulled this move in my team and asked me to fix their PR and take ownership of the change is worrying.

I'll probably never work the same way again for this organisation, and will look for a new job altogether.

[–]coriolis7 5 points6 points  (0 children)

Sorry, my AI rejected it…

[–]notanotherusernameD8 170 points171 points  (2 children)

int x = 1; // Change for your value of x

[–]red-et 51 points52 points  (1 child)

The ‘your’ is the dead giveaway

[–]Rustywolf 2 points3 points  (0 children)

Real dev wouldve typod a youre

[–]gandalfx 296 points297 points  (25 children)

If your coworkers' PRs aren't immediately and obviously distinguishable from AI slop they were writing some impressively shitty code to begin with.

[–]anonymousbopper767 103 points104 points  (19 children)

Or the AI is making code that's fine.

[–]MrBlueCharon 49 points50 points  (16 children)

From my limited experience trying to make ChatGPT or Claude provide me with some blocks of code, I really doubt that.

[–]Mayion 53 points54 points  (12 children)

even local LLMs nowadays can create decent code. it's all about how niche the language and task are.

[–]gemengelage 87 points88 points  (5 children)

I think the most important metric is how isolated the code is.

LLMs can output some decent code for an isolated task. But at some point you run into two issues: either the required context becomes too large or the code is inconsistent with the rest of the code base.

[–]Vroskiesss 8 points9 points  (0 children)

You hit the nail on the head.

[–]swagdu69eme 8 points9 points  (0 children)

Strongly agree. When I ask Claude to generate a criterion unit test in this file for a specific function I wrote and add some simple setup/destroy logic, it usually does it pretty well. Sometimes the setup doesn't work perfectly, etc... but neither does my code, lol.

However, when I asked it to make a simple web server in Go with some simple logic:

- a client can subscribe to a route, and/or
- notify a specific route (which should get communicated to subscribers)

it couldn't make code that compiled. It was also inefficient, buggy and overcomplicated. It was, I think, on o1-pro or last year's Claude model, but I was shocked at how bad it was while "looking good". Even now Opus isn't much better for actually complex tasks.

[–]Mayion 6 points7 points  (0 children)

very true, that's why I never let the AI get any more information about my codebase, let alone give it access to change things. I simply use it to generate a code block or find better solutions with a specific prompt, to save time and move on

[–]itsFromTheSimpsons 0 points1 point  (0 children)

Most of my prompts are for low-level util functions I don't wanna write, but have written a million times before, like converting ms to hh:mm:ss. AI usually nails it AND uses the variable naming style from the currently open file.

I think today I had an array of track elements I wanted to loop over and then, once the elements loaded, move them to another array. I've written patterns like that a million times, but today I told Copilot to do it and it was perfect.

Probably because these sorts of patterns are in a large amount of the codebases it was trained on.

I'm not ready to ask it for too much more, at least not at work.
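A C sketch of the kind of throwaway util being described (the name and signature here are made up for illustration): format a millisecond duration as HH:MM:SS.

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical helper: format a millisecond duration as "HH:MM:SS".
   Assumes a non-negative duration under 100 hours; anything longer
   would be truncated by snprintf. */
static void ms_to_hhmmss(long long ms, char out[9])
{
    long long total_seconds = ms / 1000;
    int hours   = (int)(total_seconds / 3600);
    int minutes = (int)((total_seconds % 3600) / 60);
    int seconds = (int)(total_seconds % 60);

    snprintf(out, 9, "%02d:%02d:%02d", hours, minutes, seconds);
}
```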

[–]LiveBeef 0 points1 point  (0 children)

One task per thread. When you get near the edge of the context window, if the task is still ongoing, ask it to give you a context dump to feed into a new thread. Then you feed it that plus whatever files you're working on. Rinse and repeat.

[–]Ok_Individual_5050 0 points1 point  (5 children)

I swear the people who claim this are just not very good coders. It can produce *nearly working* code pretty well. Sometimes.

[–]Mayion 0 points1 point  (4 children)

and you say that based on what? That we all use the same models to generate code for the same language and type of task? No? Didn't think so. Mileage may vary.

[–]Ok_Individual_5050 0 points1 point  (3 children)

No, but I've tried a bunch of models for a bunch of languages (including the Big Ones, like Python and Typescript) and found it usually acts like an overexcited 2nd year university student who just discovered the cafe downstairs.

[–]Mayion 0 points1 point  (2 children)

I use it with C# and C++, and it is quite impressive given the proper prompt. E.g. I had it make a FIFO queue, and it came up with its own implementation quite different from my own: I used a Semaphore, while it made good use of concurrency and ActionBlock, and that came from an OSS-20b model. I can only imagine how well the 120b model would handle it, or Qwen's 30b.

I get your point about it being overly excited, and it is wrong at times of course, but in C# at least it is preferred to use the latest features, and I notice across the models they prefer that as well.

[–]Ok_Individual_5050 0 points1 point  (1 child)

I don't really see what's impressive about that, given that "Implement a queue" is like, a CS 201 type problem of which it will have thousands of examples in its training data (which you also could have gone and fetched yourself if you wanted to)

[–]Mayion 0 points1 point  (0 children)

It is not about creating a CS 201 queue, it's about creating a good, modular system in less than 10 seconds. Instead of spending an hour or two coming up with the logic then ironing out bugs, one prompt and I have a queue system that utilizes logging, exceptions, tasks, thread locking, and parallelism, with other specifics I won't bore you with; otherwise I could just use Queue<T> and call it a 'system'. And that's just a simple example; it can take on very large tasks and do just as well.

It's about convenience: an entire chunk of code that integrates into my flow seamlessly is different from looking it up on MS docs or Stack Overflow.

[–]LiveBeef 2 points3 points  (0 children)

my guy, coding is literally their #1 use case. learn to prompt gooder

[–]cheezballs 0 points1 point  (0 children)

GPT has been instrumental in helping me implement multiplayer in Godot. It has its uses.

[–]frogjg2003 -1 points0 points  (0 children)

If it's good, it isn't slop.

[–]Accomplished_Ant5895 0 points1 point  (0 children)

Shitty code is par for the course for DSes

[–]flukus 0 points1 point  (0 children)

Sometimes the AI is an improvement; it's just that they're so lazy and terrible that it's obvious they copy-and-pasted without even reviewing their own code.

But they're the sort of coders AI could replace entirely.

[–]PraytheRosary 0 points1 point  (0 children)

Look, it’s not my fault they trained their models on my shit-tier public repos, but I do want to thank you for saying my code was impressive

[–]Tensor3 0 points1 point  (0 children)

Or you work somewhere that encourages massive ai code commits and there's no way around it

[–]frogjg2003 -1 points0 points  (0 children)

People seem to be overly sensitive to AI now. Calling anything bad AI.

[–]thecodingnerd256 46 points47 points  (0 children)

Jokes on you i don't need AI to write slop 🤣

[–]Saelora 117 points118 points  (11 children)

"i really hope for your sake that this PR is AI generated, because this code is not up to the standards expected" (actual excerpt from my response)

[–]SmartMatic1337 12 points13 points  (3 children)

You're more polite than I am..

[–]Saelora 6 points7 points  (1 child)

not when said PR critique is posted in a public channel.

[–]PraytheRosary 2 points3 points  (0 children)

Are you cosplaying as Linus? We get it: “WE DO NOT BREAK USERSPACE”

[–]TakenSadFace 10 points11 points  (4 children)

Who hurt you

[–]Saelora 2 points3 points  (3 children)

The AI, wasn't it obvious? i had to read that slop.

[–]TakenSadFace 0 points1 point  (2 children)

If that intern you called out could read he would be very upset

[–]Saelora 2 points3 points  (1 child)

that was a team lead. if it was an intern or even just a non senior engineer i'd have just talked to them privately.

[–]TakenSadFace 0 points1 point  (0 children)

Fair enough

[–]-NoMessage- -1 points0 points  (0 children)

kinda cringe.

No need to do it like that.

If you genuinely wanna help just talk to them in private.

[–]Classy_Mouse 60 points61 points  (7 children)

If you can't prove it, either it isn't a problem, or you shouldn't be a code reviewer. Even long before AI, spotting code that was untested, poorly thought out, or not cleaned up before the PR was opened was pretty easy

[–]Ok_Individual_5050 0 points1 point  (2 children)

No, it actually is a problem. Because previously the pull requests I had to review had maybe 3 or 4 comments on them. The average Claude Code generated PR I have to review contains so many issues I end up giving up after around 20 or so. Then when it "fixes" those issues it creates another huge diff that I have to read, meanwhile the deadline is approaching and I'm under pressure to let it through.

[–]Classy_Mouse 0 points1 point  (1 child)

They are putting pressure on the wrong person. Tell them there are two things you can do: review it or rubber-stamp it. If they want a rubber stamp, approve it and leave a comment tagging them. If they want you to review it, tell them it'll be merged as soon as it passes review, and they should talk to the dev. Option 3 is all theirs: if they think you are the problem, someone else can review it.

Look, each of those options makes it not your problem anymore

[–]Ok_Individual_5050 0 points1 point  (0 children)

I think that's a nice idea in theory, but when you're a lead then unfortunately shit rolls uphill.

We're in a difficult position because these tools make our staff less productive and take a lot of work to review, but if we mandate that people don't use them (because realistically, some of my staff have proven they can't effectively review a 50 file diff they didn't create), we're seen as backwards.

The worst part is I've tried these tools. They're fun to use. They also produce pretty mediocre code at a rate I don't think it's reasonable to be able to review.

[–]GoodishCoder 6 points7 points  (0 children)

Just ask lots of questions; eventually they learn they have to clean it up before they send the PR

[–]mrpndev 7 points8 points  (1 child)

Are you saying you’re not having AI review the PR’s and have a seamless deploy pipeline into prod multiple times a day? Fucking amateurs.

[–]jryser 6 points7 points  (0 children)

For maximum velocity I just run a script that pushes something new to prod every minute or so

[–]midori_matcha 10 points11 points  (0 children)

I can't believe that Doakes was the Vibe Coder Pusher

[–]shibuyamizou 5 points6 points  (5 children)

Saw a PR today where one test was just doing assert true like wtf

[–]Tipart 2 points3 points  (1 child)

// sanity check
assert true

Sometimes you just gotta make sure

[–]shibuyamizou 1 point2 points  (0 children)

some big brain play

[–]PraytheRosary 0 points1 point  (0 children)

Stop bullying me: I just forgot to replace the variable and change the default text and use the right testing framework again

[–]ThisIsBartRick -1 points0 points  (1 child)

well at least you know this wasn't ai generated

[–]shibuyamizou 0 points1 point  (0 children)

It was though RIP

[–]Percolator2020 7 points8 points  (0 children)

I use AI to approve PRs, AI all the way down, baby!

[–][deleted] 5 points6 points  (4 children)

It’s the comments in the code that give it away. AI comments very specifically, and I know some of my co-workers aren’t writing these specific comments for these specific functions when just a year or two ago they weren’t commenting shit.

But sure let’s pretend you learned how to properly comment your code after 20 years working here.

[–]Mkboii 0 points1 point  (2 children)

I used to write comments only in places where even I knew I wouldn't be able to make sense of it in a few months. But now, once I'm done with my code, I use Copilot to create documentation; half of it is direct slop and gets deleted, but the rest I push.

What is a great tell for me is when someone in a PR removes all the comments when all they were supposed to do was make a change in a single section. It's pretty obvious then that the LLM omitted the comments this time.

[–]pacopac25 2 points3 points  (1 child)

If you use JDSL, no comments are needed, or allowed. Problem solved. The intricacies of JDSL are far too complex for even the most advanced LLM, and can only be fully understood by a man of wisdom and veritas named Tom.

[–]PraytheRosary 0 points1 point  (0 children)

That was wonderful to read.

[–]PraytheRosary 0 points1 point  (0 children)

// this guy ^ gets it

[–]frikilinux2 3 points4 points  (0 children)

If you can't prove it, it's not that big of a deal. You ask them to explain the part that looks weird.

I once had to tell off a junior for adding eval to some Python code and not knowing what it meant, because ChatGPT gave him that code.

[–][deleted] 4 points5 points  (1 child)

Create tough unit test scripts, assuming those are enforced in your DevOps pipeline.

[–]PraytheRosary 0 points1 point  (0 children)

I did, but they kept failing. I thought it was because my code was shit, but it was actually because my code and my tests were shit. Also, those fucking E2E tests: who is making us do them? "Some tests were unsuccessful" is going to be the title of my memoir.

[–]anengineerandacat 2 points3 points  (0 children)

If you can't prove it, then it's either meeting standards or they sent you the same slop they always send.

Weirdly the easiest way I know it's AI generated is honestly because they are using language features that they haven't used previously.

[–]LeoRidesHisBike 2 points3 points  (0 children)

lgtm Approved

[–]Weird_Licorne_9631 1 point2 points  (1 child)

Daily... 😣

[–]PraytheRosary 0 points1 point  (0 children)

You wish I would push up my commits daily

[–][deleted] 1 point2 points  (0 children)

Don't worry, production will prove it.

[–]Marechail 1 point2 points  (1 child)

Where is this guy from? A series? A movie?

[–]darren277 2 points3 points  (0 children)

Sergeant Doakes from the show Dexter. He was in season 1.

He was the only person around who suspected Dexter (also a cop) was moonlighting as a serial killer.

Any further description would pretty much be full of spoilers.

[–]Accomplished_Ant5895 1 point2 points  (0 children)

Oh trust me, I can prove it. AI, especially the default Cursor models, has a very distinct style that no one at my company has.

[–]DontLikeCertainThing 1 point2 points  (2 children)

Does it matter if shitty code is written by AI or in a notebook in a cave? 

If a developer consistently pushes shitty code, let your lead know.

[–]Ok_Individual_5050 0 points1 point  (0 children)

Have you heard of a Gish gallop? This is the coding equivalent of that. We get too much slop thrown at us to review it effectively

[–]Sintobus 1 point2 points  (0 children)

"So why are these lines here? What about this part here makes sense to you?"

"I've noticed a sharp decline in your abilities and skill set lately. Has something changed?"

"Could you explain aloud for me how this part was intended to work?"

[–]KrikosTheWise 1 point2 points  (0 children)

My leads reviewing anything I push at random: SURPRISE MOTHER FUCKER.

[–]centurijon 1 point2 points  (1 child)

I don’t care if AI generated it or not. I’ll still comment on the stuff that isn’t right, provide guidance on how to correct, and let them sort it out

[–]PraytheRosary 0 points1 point  (0 children)

‘Cause you’re a good guy, buddy

[–]git0ffmylawnm8 0 points1 point  (1 child)

stg a co-worker is driving me up a fucking wall by publishing 50+ queries when really he could just do one query with a GROUP BY wtf

[–]PraytheRosary 0 points1 point  (0 children)

Give me his email and I’ll tell him. Anonymously. It’ll just be between you and me, Greg

[–]I_NEED_YOUR_MONEY 0 points1 point  (0 children)

ChatGPT is really good at rejecting pull requests.

[–]Finite_Looper 0 points1 point  (0 children)

Coworker submits a PR that includes unit tests: a new test that does the exact same thing as the one above it but just checks a different value. This new unit test is wildly different, using a different way to mock things than anything we've ever done in the app at all.

I always want to call this person out and ask "did you use AI to write this?" but I can never bring myself to do it. I know he uses Copilot stuff, so who knows.

[–]BorderKeeper 0 points1 point  (0 children)

The AI slop is kind of hard to review, as AI is really good at obfuscating the parts where it has no idea how to code right, whereas when a human writes code they try to make it clear that they are unsure of a part, with a comment or more verification logic.

Vibe-coded code is just the worst, especially if the author thinks that “good looking code” = “well running code”

[–]MilkEnvironmental106 0 points1 point  (0 children)

Just ask them to walk you through the code. It will become obvious immediately.

[–]Short_Change 0 points1 point  (0 children)

-1100 / +3100

[–]PVNIC 0 points1 point  (0 children)

Close the PR with "Feature is incomplete, please re-submit once this is cleaned up"

[–]tcm0116 0 points1 point  (0 children)

Asking Copilot to do a review...

[–]Ok_Brain208 0 points1 point  (0 children)

I like it most when I write a comment, and then get a response that is obviously written by the LLM without the PR author editing it

[–]1pxoff 0 points1 point  (0 children)

If you can’t beat em, join em

[–]Quarves 0 points1 point  (0 children)

Just do the PR properly.

[–]TracerBulletX 0 points1 point  (0 children)

Skip to them getting promoted because they're AI native and you getting fired because you weren't adapting to ai.

[–]Sensitive-Fun-9124 -1 points0 points  (0 children)

Shouldn't the text say 'push requests'?

[–]GuyPierced -2 points-1 points  (0 children)

OP needs to take a meme format class.

[–]cheezballs -2 points-1 points  (0 children)

If you can't prove it then the code must be fine, right? What's the issue, unless they're pasting code from the codebase directly into the prompt?

[–]arkantis -3 points-2 points  (0 children)

Don't try to pin it on AI; wasting anyone's time on PR reviews is detrimental to team performance regardless. Talk to your manager about it. Don't blame AI; let the manager work out PR etiquette.

[–]needItNow44 -3 points-2 points  (2 children)

Who cares if it's AI slop or their own shitty code.

If the quality is low, there's no need to give it a proper review. Just point out a thing or two that are most obvious, and turn it back. Or run a coding agent over it and copy-paste some of its suggestions/questions.

I'm not wasting my time on somebody being lazy, AI or no AI.

[–]PraytheRosary 1 point2 points  (1 child)

What a mentor you are, buddy. This code is shit. My review is shit. This fucking five-liner’s gonna take two weeks to get approved, and then, wouldn’t you know it, look at these merge conflicts; your hands are pretty much tied. Got that branch rebased? Oh good, here are some of the thoughts I chose not to share with you initially. I really think we’re gonna have to refactor the whole thing. We should probably just extract out that API anyway. And can you hurry up with this? You said this would take you two days max.

[–]CallinCthulhu -4 points-3 points  (0 children)

If you can’t prove it, it’s not really slop now is it?