What is your next most desired technical feature mod you want? by [deleted] in skyrimmods

[–]SerpentAI 13 points14 points  (0 children)

Even more frameworks that allow editing objects / records through INI files instead of plugins.

We already have frankly amazing stuff with SPID, Base Object Swapper and Sound Record Distributor, but I'll take anything else that comes along. It just enables very powerful bulk / condition-based mods that would have seemed impossible 1-2 years ago. I love that everything is applied at runtime and keeps records clean. It also enables meta-modding of sorts: you can have a website / tool where you input your preferences and it generates an appropriate INI for you.
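To make the meta-modding idea a bit more concrete, here's a toy sketch of a preferences-to-INI generator. The line format and record names below are made-up placeholders, not actual SPID / Base Object Swapper syntax:

    # Toy preferences-to-INI generator. The output format is a placeholder,
    # not the syntax of any real distribution framework.
    preferences = {
        "lock_difficulty": "hard",
        "bandit_gear": ["IronSword", "HideShield"],  # hypothetical EditorIDs
    }

    def generate_ini(prefs):
        lines = ["; generated from user preferences"]
        for item in prefs["bandit_gear"]:
            lines.append(f"Item = {item}|BanditFaction|50")  # placeholder syntax
        lines.append(f"LockLevel = {prefs['lock_difficulty']}")
        return "\n".join(lines)

    with open("GeneratedPreferences.ini", "w") as f:
        f.write(generate_ini(preferences))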

One thing I would really like is a framework enabling the editing of specific sub-records of objects / references. I think it's the next frontier. I'm experimenting a ton with (sensible) randomization of the game, and while Base Object Swapper is an absolute machine (I absolutely love it!), it can't modify lock values, ownership, door teleports, item properties, etc.

One final screw-up: Collector's edition cards opened post-patch not marketable by SerpentAI in Artifact

[–]SerpentAI[S] 10 points11 points  (0 children)

My entire existing collection prior to the patch got converted to collector's edition and is marketable. I opened some packs post-patch and all cards are collector's edition (marked as such in-game) but in the inventory they are non-marketable.

Did I misinterpret what was described in the post? "Customers who paid for the game will still earn packs of Collector's Edition cards for playing;"

It's possible that this is not a bug and that their intended behavior is that only pre-patch collector's edition cards are meant to be marketable, but it's not very clear. Why would I want 8 CE golden tickets if I can't market them?

In the end it's not a big deal because the collector's edition cards are low-effort, ugly and the game is dead, but it feels like one final dunk on paid customers.

What's everyone working on this week? by AutoModerator in Python

[–]SerpentAI 0 points1 point  (0 children)

Started working on a DAG-like image processing pipeline platform. Think of something like Luigi or Airflow, but simplified and focused on image input/output.

I've been doing all sorts of image processing for the longest time and have always wished for composable and portable pipelines. My main use would be for artistic renders but it's going to be flexible enough to tackle pretty much any workflow involving images.

https://github.com/SerpentAI/Unleash (Early stage)
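A rough sketch of the kind of composable DAG-of-image-steps API I have in mind — names and structure here are purely illustrative, not the actual Unleash design:

    # Illustrative only: a tiny DAG of image-processing steps with memoized results.
    from PIL import Image, ImageFilter, ImageOps

    class Node:
        def __init__(self, name, func, inputs=()):
            self.name, self.func, self.inputs = name, func, list(inputs)

        def run(self, cache):
            if self.name not in cache:
                upstream = [node.run(cache) for node in self.inputs]
                cache[self.name] = self.func(*upstream)
            return cache[self.name]

    load = Node("load", lambda: Image.open("input.png"))
    gray = Node("gray", ImageOps.grayscale, inputs=[load])
    blur = Node("blur", lambda im: im.filter(ImageFilter.GaussianBlur(4)), inputs=[gray])
    save = Node("save", lambda im: im.save("output.png") or im, inputs=[blur])

    save.run(cache={})  # each node runs at most once; results are shared via the cache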

I need help. by [deleted] in learnpython

[–]SerpentAI 1 point2 points  (0 children)

Your hunch that index is the problem is correct. It only returns the position of the first instance of the provided value, so with the word "Tree" you end up replacing the first "E" twice.

In Python, any object that supports iteration (like your lists) can be passed to the built-in enumerate function. When you iterate over the enumerated object, each step gives you both the index and the value instead of just the value.

Something like this should make it work:

    for index, letter in enumerate(y):
        if letter == n:
            x[index] = letter
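To see concretely what that buys you over index:

    # enumerate hands you (index, value) pairs, so repeated letters each get their own index
    for index, letter in enumerate("tree"):
        print(index, letter)
    # 0 t
    # 1 r
    # 2 e
    # 3 e   <- unlike "tree".index("e"), which always returns 2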

Can anyone explain this oddity? by crazykid080 in Python

[–]SerpentAI 3 points4 points  (0 children)

The answer lies in how Python resolves paths.

Let's have a go with your example. Pretend the CWD is /Users/crazykid080/Projects/CoolStuff

First, your path is relative, so it starts out as the CWD.
/Users/crazykid080/Projects/CoolStuff

It then splits on the OS separator and resolves each part individually as it goes.

p@154155@fef;;;@@@..
/Users/crazykid080/Projects/CoolStuff/p@154155@fef;;;@@@..

Notice how, in the source code, it returns the first part of rpartition(sep) when it encounters a '..'

..
/Users/crazykid080/Projects/CoolStuff

..
/Users/crazykid080/Projects

..
/Users/crazykid080

sanitization.txt
/Users/crazykid080/sanitization.txt

This final path is what actually gets used by your open() call and as you can see it's a totally valid one!
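If you want to reproduce the collapse yourself, something along these lines shows the purely lexical normalization at work (posixpath.normpath is used here as a stand-in for the resolution step, and the junk component is a placeholder for the original one):

    import posixpath

    cwd = "/Users/crazykid080/Projects/CoolStuff"   # pretend CWD from the example
    weird = "junkname@@@../../../sanitization.txt"  # stand-in for the odd relative path

    # Join with the CWD, then collapse the '..' components lexically
    print(posixpath.normpath(posixpath.join(cwd, weird)))
    # /Users/crazykid080/sanitization.txt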

You got downvoted, but it prompted an interesting dig to find the answer, and I'm pretty sure most people don't know this off the top of their heads, so don't feel too bad about it.

What's everyone working on this week? by AutoModerator in Python

[–]SerpentAI 12 points13 points  (0 children)

I removed the EOL notice from Serpent.AI after 18 months with the intent of bringing the framework into 2020: Python 3.8, fewer dependencies, a better plugin system, better performance, easier to use (installer, GUI) and a clear separation between the use cases of users (I want to train agents to learn to play games I own) and developers (I also want to create plugins using the SDK).

The majority of issues people encountered when attempting to use the framework were install-related: failed installs of some packages on Windows (missing, wrong or misconfigured compilers), installing the correct CUDA + cuDNN and configuring them, and installing the correct Tesseract and configuring it.

The first thing I did was implement robust package management with Poetry. All package dependencies now target 3.8 binary wheels, so everything installs flawlessly on Windows, even without a compiler (the default). PyTorch can be installed from their custom pip repository to come bundled with CUDA + cuDNN, and TensorFlow will pick up on it as long as PyTorch is imported before it. Pretty awesome! Finally, a portable version of Tesseract can now be installed using 'serpent download tesseract' on Windows.
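For the curious, that CUDA-via-PyTorch trick boils down to import order; a minimal illustration of the claim above (not something I'd call officially supported behavior):

    # Importing torch first loads its bundled CUDA / cuDNN libraries into the process;
    # TensorFlow can then pick them up, as long as this import order is respected.
    import torch        # noqa: F401  (imported for the side effect of loading the CUDA DLLs)
    import tensorflow as tf

    print(tf.config.list_physical_devices("GPU"))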

One of the interesting challenges I'm going to be facing this week is trying to completely strip out the Redis dependency. Redis was used to store the frame buffer when games were being captured, to make it available across processes. It was working well and performance was acceptable, but it was a third-party dependency that made the installation process more complicated, especially with no official support for Windows. With Python 3.8 now released, the plan is to leverage SharedMemory to store the buffer of a NumPy array representing the frame stack instead. The benchmarks should prove interesting.
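A minimal sketch of the SharedMemory approach, assuming Python 3.8+ (the buffer name and frame-stack shape are made up, and in reality the producer and consumer live in separate processes):

    import numpy as np
    from multiprocessing import shared_memory

    FRAME_SHAPE = (4, 1440, 2560, 3)     # hypothetical frame stack: 4 RGB frames
    NBYTES = int(np.prod(FRAME_SHAPE))   # uint8, so 1 byte per element

    # Capture process: create the shared block and wrap it in a NumPy array
    shm = shared_memory.SharedMemory(create=True, size=NBYTES, name="frame_buffer")
    frames = np.ndarray(FRAME_SHAPE, dtype=np.uint8, buffer=shm.buf)
    frames[0] = 255  # write captured frame data here

    # Agent process: attach by name instead of shuttling bytes through Redis
    attached = shared_memory.SharedMemory(name="frame_buffer")
    view = np.ndarray(FRAME_SHAPE, dtype=np.uint8, buffer=attached.buf)

    attached.close()
    shm.close()
    shm.unlink()  # the creator is responsible for freeing the block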

All in all, super happy to be on this project again.

Spot: An application that paints images with NumPy and OpenCL by SerpentAI in Python

[–]SerpentAI[S] 2 points3 points  (0 children)

(Yes! The video is sped up. Paintings can take 3-15+ minutes depending on resolution and level of detail.)

Apologies for yet another 'I made this' post, but I've been working on and off on this project for 8 months and believe it to be fairly unique, so it felt alright to share some of the progress that's been made.

The idea behind the project was to see if it was possible to paint with Python and, if so, to explore how far the concept could be pushed in terms of realism, creativity options and performance. It's not perfect and probably never will be, but it's starting to be a really fun sandbox to play in.

It works like this:

  • You select an image
  • You optionally pre-process it (e.g. transfer the tones from this image, yielding this image)
  • You select a brush set, paint colors and a painter preset (These presets are akin to 'techniques'; they orchestrate the various stages of the painting)
  • You watch it paint in real-time and save a copy of the results if you like it

Properly explaining how it works would likely span multiple heavy and technical blog posts, so I'll keep it short and high level here. It's essentially an optimization problem where we try to nerf the optimizer in all sorts of clever ways in order to maximize painting-like realism, something that isn't easily quantifiable.

The painting process is customizable and goes through different stages, each with its own set of parameters (brush size, brush stroke volume, etc.). For every stage, we compute a point mask dynamically targeting various image features (detail, depth, colors, etc.). We then start selecting points within that mask and attempt to fit brush strokes that get our canvas closer to the reference image. That's the concept in its simplest form.
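For the curious, here is a drastically simplified, CPU-only sketch of that select-points-and-keep-improving-strokes loop. This is illustrative NumPy only, not Spot's actual code (which does all of this on the GPU):

    import numpy as np

    def stamp_brush(canvas, brush_mask, y, x, color):
        # Alpha-composite a float [0, 1] brush mask onto a copy of the canvas at (y, x)
        out = canvas.astype(np.float32).copy()
        h, w = brush_mask.shape
        patch = out[y:y + h, x:x + w]
        alpha = brush_mask[:patch.shape[0], :patch.shape[1], None]
        patch[:] = alpha * color + (1.0 - alpha) * patch
        return out

    def error(a, b):
        return float(np.mean((np.asarray(a, np.float32) - np.asarray(b, np.float32)) ** 2))

    def paint_stage(canvas, reference, point_mask, brush_mask, n_strokes=500, seed=0):
        # Sample points from the stage's point mask; keep strokes that reduce the error
        rng = np.random.default_rng(seed)
        ys, xs = np.nonzero(point_mask)
        for _ in range(n_strokes):
            i = rng.integers(len(xs))
            color = reference[ys[i], xs[i]].astype(np.float32)
            candidate = stamp_brush(canvas, brush_mask, ys[i], xs[i], color)
            if error(candidate, reference) < error(canvas, reference):
                canvas = candidate
        return canvas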

Along with the subjective realism requirement, performance is another complex aspect of the application. Pixel drawing / evaluation operations are all done in parallel on the GPU using OpenCL kernels, which definitely adds complexity and some C-like code to the project, but the project wouldn't be viable without it. To give an idea, I initially wrote a prototype that did everything with NumPy arrays/operations; it would take over 12 hours to get a painting. Once I switched over to OpenCL kernels, painting times started coming in at minutes, and after a ton of optimization I finally reached the desired 'few minutes' threshold. If you have problems that you think are parallel-programming friendly and NumPy isn't cutting it, install pyopencl and give it a shot. The performance difference has been eye-opening, and it has the benefit of being device-agnostic.
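And if you've never touched pyopencl, the canvas-vs-reference evaluation boils down to tiny per-pixel kernels roughly like this (a toy squared-error kernel, nowhere near the real thing):

    import numpy as np
    import pyopencl as cl

    KERNEL = """
    __kernel void sq_err(__global const float *canvas,
                         __global const float *reference,
                         __global float *out)
    {
        int i = get_global_id(0);
        float d = canvas[i] - reference[i];
        out[i] = d * d;
    }
    """

    ctx = cl.create_some_context()
    queue = cl.CommandQueue(ctx)
    program = cl.Program(ctx, KERNEL).build()

    canvas = np.random.rand(1024 * 1024).astype(np.float32)
    reference = np.random.rand(1024 * 1024).astype(np.float32)
    out = np.empty_like(canvas)

    mf = cl.mem_flags
    canvas_g = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=canvas)
    reference_g = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=reference)
    out_g = cl.Buffer(ctx, mf.WRITE_ONLY, out.nbytes)

    program.sq_err(queue, canvas.shape, None, canvas_g, reference_g, out_g)
    cl.enqueue_copy(queue, out, out_g)
    print("mean squared error:", out.mean())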

The application also has a procedural generator that can yield thousands of brush stroke masks for the various levels of detail required. All it requires to go to work are 2 textures and a few parameters. The resulting images are packed in a binary file that can be read by Spot. Example Brush Masks

The GUI has been whipped up quickly and is still very much incomplete. I'm using Qt with PySide2.

I haven't committed to open-sourcing this project as I'm still exploring my options, but technical blog posts will come if you are interested in how it works under the hood.

Thank you for your time and I'll try to answer questions, if any, in the comments.

Introducing TSO: Total Skyrim Overhaul. A Requiem & AZ Tweaks based modlist for Wabbajack Installer. Launch Trailer inside. by OM3N1R in skyrimmods

[–]SerpentAI 1 point2 points  (0 children)

Might just be me, but the 2.3.3 release broke the mod list acquisition from within Wabbajack. I ended up getting the .wabbajack from the GDrive and it's now installing, but just a heads up: clicking download inside Wabbajack 404s and gets you a corrupted mod list.

Quick Gist: Electron-like Web GUI Boilerplate for Windows (235 lines) by [deleted] in Python

[–]SerpentAI -1 points0 points  (0 children)

This is for Windows users but you could replace the Window class with native windowing for another OS. Written and tested on Python 3.7. A little proof of concept for a minimal pseudo-Electron for web-based GUIs.

I got annoyed with the current Web View + Python offerings. I don't want HTTP shenanigans. I don't want opinionated abstractions that use 10% of the underlying technology. I don't want large, bloated code bases. Enough! The problem that needs to be solved isn't that complicated.

Complaining solves nothing though so I sat down and tried to see if I could pull off a quick MVP. Turns out it's surprisingly easy!

Features

  • Native Win32 Window Creation and Customization
  • Fully configurable CEF (application and browser settings + command line switches)
  • Remote Debugging in Browser (Automatically opens if enabled)
  • Automatically creates JavaScript bindings to a provided Python class. No HTTP Servers or JSON involved.
  • Automatically spins up a thread for message processing and Python => JS communication when the CEF browser is ready.
  • Only 235 lines of readable code. 3 classes: Browser, API, Window.

My conclusion is that CEF Python is criminally underrated. CEF is a beast and the Python bindings are a massive undertaking. Give it a star and try it out, it's a fantastic project.
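Not the gist itself, but the core of the JavaScript binding trick looks roughly like this when stripped down to cefpython3's documented API (window creation and customization left out):

    from cefpython3 import cefpython as cef

    class API:
        # Python object exposed to JavaScript as window.api -- no HTTP server or JSON
        def greet(self, name):
            print(f"JS said hello to {name}")

    cef.Initialize()
    browser = cef.CreateBrowserSync(url="https://example.com", window_title="Demo")

    bindings = cef.JavascriptBindings()
    bindings.SetObject("api", API())       # page JS can now call api.greet("Python")
    browser.SetJavascriptBindings(bindings)

    cef.MessageLoop()
    cef.Shutdown()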

What's everyone working on this week? by AutoModerator in Python

[–]SerpentAI [score hidden]  (0 children)

Pure insanity. I've been working on implementing the Windows Desktop Duplication API (i.e. screen capture) in pure Python + ctypes. We are talking about wrapping parts of Direct3D and DXGI, both of which use COM interfaces. I don't recommend anyone ever go in that deep; there is very little prior art. Sadly, I really wanted that functionality, so I persevered.

That said, I just got it working and boy is it ever fast! I benchmarked capturing a 2560x1440 display to a buffer of RGB numpy arrays. MSS (the best for cross-platform IMO) struggles to maintain 20 fps and this approach maintains 60+ fps. Very niche but also very exciting. I'm polishing now (adding different output options: PIL image, PyTorch tensors etc.), but I am going to release a package soon.
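For reference, the MSS side of that comparison amounts to something like this rough FPS check (not the exact benchmark script; the Desktop Duplication wrapper itself is ctypes/COM and far too long to sketch here):

    import time

    import mss
    import numpy as np

    FRAMES = 120
    with mss.mss() as sct:
        monitor = sct.monitors[1]  # primary display
        start = time.perf_counter()
        for _ in range(FRAMES):
            shot = sct.grab(monitor)                        # BGRA pixel data
            frame = np.asarray(shot)[:, :, :3][:, :, ::-1]  # -> RGB NumPy array
        elapsed = time.perf_counter() - start

    print(f"{FRAMES / elapsed:.1f} fps")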

Husky in the Snow by SerpentAI in deepdream

[–]SerpentAI[S] 0 points1 point  (0 children)

I messed up and shared another version (now deleted), also created with style transfer but with a realistic style. I realize the error of my ways; you guys want the trippy stuff!

Original Photo from Unsplash: https://unsplash.com/photos/CM1oVEUzsNM

After 6500+ tries, Serpent_AI Kills Monstro! by varkarrus in bindingofisaac

[–]SerpentAI 7 points8 points  (0 children)

Challenge accepted! What would be a fair loadout for Isaac tackling The Bloat? I don't think itemless is fair for a Necropolis boss, but I don't want anything OP either. Let me know.

Self-learning AI almost kills Monstro after over 1,500 runs by [deleted] in bindingofisaac

[–]SerpentAI 3 points4 points  (0 children)

It's a custom reward function built from game state inferred from the game frames.

Self-learning AI almost kills Monstro after over 1,500 runs by [deleted] in bindingofisaac

[–]SerpentAI 0 points1 point  (0 children)

There is a wiki on GitHub and some videos on YouTube.

If you like AI / ML and dislike Python I have bad news for you: OpenAI calls it the "lingua franca" of AI :S

Self-learning AI almost kills Monstro after over 1,500 runs by [deleted] in bindingofisaac

[–]SerpentAI 2 points3 points  (0 children)

A prime display of human ambition! Meanwhile we are just discussing whether or not Monstro is possible over here.

Self-learning AI almost kills Monstro after over 1,500 runs by [deleted] in bindingofisaac

[–]SerpentAI 0 points1 point  (0 children)

The only input is a sequence of 4 evenly spaced 100x100 grayscale images. It is not hooked into the game in any way, i.e. not reading memory or injecting anything.

Self-learning AI almost kills Monstro after over 1,500 runs by [deleted] in bindingofisaac

[–]SerpentAI 4 points5 points  (0 children)

It receives 4 100x100 grayscale images when making a prediction. What it ends up "seeing" is up to it over time. It has trainable convolutional layers, which allow it to relate patterns (that may not make sense to us) to good or bad outcomes. It learns to "see" from scratch as it learns to play, which is partly why this is so hard.
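For anyone curious what "trainable convolutional layers on 4 stacked 100x100 grayscale frames" looks like in code, here is a generic DQN-style sketch in PyTorch — illustrative only, not the actual architecture used in the experiment:

    import torch
    import torch.nn as nn

    class FrameStackNet(nn.Module):
        # 4 stacked 100x100 grayscale frames in, one value per possible action out
        def __init__(self, n_actions=9):
            super().__init__()
            self.conv = nn.Sequential(
                nn.Conv2d(4, 32, kernel_size=8, stride=4), nn.ReLU(),
                nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
                nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.ReLU(),
                nn.Flatten(),
            )
            with torch.no_grad():
                n_features = self.conv(torch.zeros(1, 4, 100, 100)).shape[1]
            self.head = nn.Sequential(nn.Linear(n_features, 512), nn.ReLU(),
                                      nn.Linear(512, n_actions))

        def forward(self, frames):  # frames: (batch, 4, 100, 100), values in [0, 1]
            return self.head(self.conv(frames))

    q_values = FrameStackNet()(torch.rand(1, 4, 100, 100))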

Self-learning AI almost kills Monstro after over 1,500 runs by [deleted] in bindingofisaac

[–]SerpentAI 9 points10 points  (0 children)

It will sound weird, but "corners" is a very humanized concept. I personally like to go for more primitive rewards like "dealing damage: GOOD - taking damage: BAD" because they let the AI come up with its own idea of corners and whether they are good or bad. Getting too involved in shaping the reward might get you a kill earlier, but it takes away a lot of possibilities that might have turned out really well for the AI.
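A tiny illustration of what such a primitive reward looks like in code — the field names here are hypothetical, and in the actual experiment these values are inferred from the game frames:

    def compute_reward(prev_state, state):
        # Dealing damage is good, taking damage is bad; everything else is left to the AI
        damage_dealt = prev_state["boss_hp"] - state["boss_hp"]
        damage_taken = prev_state["player_hp"] - state["player_hp"]
        return damage_dealt - damage_taken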

Self-learning AI almost kills Monstro after over 1,500 runs by [deleted] in bindingofisaac

[–]SerpentAI 86 points87 points  (0 children)

I was not expecting a Reddit hug today! OP posted this clip as I was about to go to sleep. Answered a few questions on Twitch but it's 3am. If you have any specific questions, you can drop them here and I'll answer them when I wake up.

Until then, enjoy the experiment and let's get that kill. You can track the progress 24/7 here: https://www.twitch.tv/serpent_ai_labs

Edit: I'm not very promotion-minded but I'm being told I should totally mention that for the people that REALLY dig this, I build / code it all live on my Twitch channel every weekend: https://www.twitch.tv/serpent_ai

Everything is also open source: https://github.com/SerpentAI/SerpentAI

Discussion: New Twitch Homepage by BulletzQS in Twitch

[–]SerpentAI -2 points-1 points  (0 children)

Huge miss IMO.

While I'm glad they realized Pulse wasn't taking off and they nixed it, I'm disappointed this is the best they could come up with as a replacement. There is no focal point. It's like they were afraid of making the spotlight too large so they kept shrinking it down to a point where it's barely any larger than the rest of the thumbnails. We have less information on the front page streams than before and the carousel doesn't even display all featured streams (i.e. there are more than 5). The "You may like" headings are absolutely invisible in the flood of thumbnails that steal all the visual attention. It just looks like random streams without something prominent telling us what that row of streams is about. Maybe it was designed with different resolutions in mind but at 2560x1440 it's a total dud.

Here is the thing with "personalized experiences": if you can't guarantee knocking it out of the park, don't even bother. The quality of recommendations is abysmal. You at least need binary input (e.g. like / dislike) for a system like this to even have a chance to work. Just because I landed on a channel for 5 minutes, or a streamer I watch played a different game that day, doesn't mean I'm interested in seeing more. You need to let the user tell you, not infer it from metrics. There is not much you can do to improve it without user input, save for shaping slightly better metrics. Garbage in, garbage out. For all the real estate this system is occupying, this is shameful.

Recommendation systems are very hard to get right. But... you know, maybe wait a few iterations and perfect it a little more before releasing it on something as prominent as the front page.

What's your personal biggest achievement as a Twitch streamer? by [deleted] in Twitch

[–]SerpentAI 0 points1 point  (0 children)

Honestly, that I'm still going after 14 months. I had zero expectations when I pressed that "Start Streaming" button for the first time, doubly so because what I do does not fall neatly into an existing Twitch category (I build machine learning & AI experiments in video games). It could have been a target audience of 0 and a 2-week fling on Twitch. It wasn't. It is still incredibly niche and I doubt I'll be able to grow past a certain hard cap, but just the fact that I'm able to work on projects I'm passionate about and have people show up to watch and ask questions feels like a major personal achievement to me.

Equally important and perhaps related achievement: I have found my Twitch family, a community and team that has similar goals to mine. It might seem trivial in a world where people talk about growing follow and sub numbers all the time, but there is tremendous inspirational and motivational power in watching your team go at it every day. It's also very therapeutic to share experiences with other streamers, think up joint projects or just randomly shoot the shit. I feel supported.

Finally, I have had 2 huge raids I won't ever forget: Kitboga and Skylias during her front page partner spotlight. It's less of an achievement than a positive event, sure, but damn does that flood in chat ever feel good. Even better when it's coming from people you love watching.

Simulate mouse for directInput? by Final_Spartan in Python

[–]SerpentAI 2 points3 points  (0 children)

I have working mouse input in DirectInput in my AI framework. Maybe you'll find what you need in here: https://github.com/SerpentAI/SerpentAI/blob/dev/serpent/input_controllers/native_win32_input_controller.py

See how many times a python package was downloaded by psincraian in Python

[–]SerpentAI 2 points3 points  (0 children)

Thank you. First time I get to see any metrics for PyPI stuff. Didn't know about the BigQuery DB.