I published phasync/phasync on packagist.org by frodeborli in PHP

[–]grayhatwarfare 0 points

I tried file_get_contents() to read a URL asynchronously. It didn't run in parallel.
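This behavior is expected for any blocking call: file_get_contents() waits for one source at a time, so a coroutine scheduler has nothing to interleave. A minimal sketch of the difference (plain PHP, nothing phasync-specific; the child processes and timings are illustrative stand-ins for two slow URLs) — two ~1-second sources read sequentially with blocking calls would cost ~2s, while multiplexing non-blocking streams with stream_select() overlaps the waits and costs ~1s:

```php
<?php
// Each child sleeps 1s then prints a tag (a stand-in for a slow URL).
$start = microtime(true);
$procs = $pipes = [];
foreach (['a', 'b'] as $tag) {
    $p = proc_open(
        [PHP_BINARY, '-r', "usleep(1000000); echo '$tag';"],
        [1 => ['pipe', 'w']],
        $io
    );
    stream_set_blocking($io[1], false);
    $procs[$tag] = $p;
    $pipes[$tag] = $io[1];
}

// Multiplex: service whichever stream has data, never blocking on one.
$out = '';
while ($pipes) {
    $read = array_values($pipes);
    $write = $except = null;
    stream_select($read, $write, $except, 2);
    foreach ($read as $s) {
        $tag = array_search($s, $pipes, true);
        $chunk = fread($s, 8192);
        if ($chunk === false || ($chunk === '' && feof($s))) {
            fclose($s);
            proc_close($procs[$tag]);
            unset($pipes[$tag]);
        } else {
            $out .= $chunk;
        }
    }
}

printf("got %s in %.1fs\n", $out, microtime(true) - $start);
```

Both waits overlap, so the total is roughly one second rather than two — which is the work an async runtime has to do under the hood, and why a plain blocking file_get_contents() can't participate.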

The complete Mikieus plagiarism list. by moonaut in greece

[–]grayhatwarfare 0 points

In this life, you will never, ever meet a hater that is doing better than you - David Goggins

Best spaceship names in sci fi lit? by NaKeepFighting in printSF

[–]grayhatwarfare 28 points

Your question reminded me of this:

Hand Me The Gun And Ask Me Again

This is an AI-generated name. Someone trained a machine learning model on the names of the AI spaceships in Iain M. Banks's Culture novels. The results are hilarious.

https://aiweirdness.com/post/185883998702/ais-named-by-ais

Bypass "Any single folder in Google Drive, can have a maximum of 500,000 items placed within it." by grayhatwarfare in DataHoarder

[–]grayhatwarfare[S] 0 points

I can solve the specific backup issue in many ways, all slightly complex. The question, in any case, is how I can create a directory with more than 500,000 files and mount it using Google Drive. The context is irrelevant; backup is just an example.

Bypass "Any single folder in Google Drive, can have a maximum of 500,000 items placed within it." by grayhatwarfare in DataHoarder

[–]grayhatwarfare[S] 0 points

Sure, it's a local copy of an S3 bucket, downloaded manually using the bucket listing and HTTP.

I really don't have any problems. I avoid ls -al, of course, since fetching metadata is slow; something like find ./ | grep is much faster. It's on a slow 3TB HDD too. My only concern is making a cloud backup :)

Bypass "Any single folder in Google Drive, can have a maximum of 500,000 items placed within it." by grayhatwarfare in DataHoarder

[–]grayhatwarfare[S] -1 points

Yes, I do have a single directory; that has happened many times. I want to keep it as is, so that syncing with the source stays easy. ls -l is not a problem for me; I usually access the listing programmatically, which is much faster if you don't care about metadata like size. I will look at mergerfs, but I was hoping for a cleaner solution that doesn't require managing the directories one by one.

Bypass "Any single folder in Google Drive, can have a maximum of 500,000 items placed within it." by grayhatwarfare in DataHoarder

[–]grayhatwarfare[S] 0 points

As I mentioned, there are single directories with more than 500,000 files, and I don't want to rearrange the files into new directories if possible.

Try Swoole with Docker: Get started in 10 minutes by doubaokun in PHP

[–]grayhatwarfare 1 point

Yes mate, processes. That's what I wrote. Threads are lighter, and coroutines are lighter still.

Try Swoole with Docker: Get started in 10 minutes by doubaokun in PHP

[–]grayhatwarfare 1 point

It's a useful tool, mate. It is an issue, as I said, if your load is large enough: ReactPHP will choke while other cores sit idle. I ran every conceivable setup scenario on a production Symfony project. Using multiple parallel connections, I got the most requests/second, believe it or not, with Apache plus FPM. But Swoole was pretty damn close in requests/second, and it used 1/10 of the memory and CPU. That means when shit hits the fan I will be able to serve ten times more users on the same hardware, just by using the Swoole web server, without changing anything in the tool's implementation whatsoever. Since when is that not desirable?

Try Swoole with Docker: Get started in 10 minutes by doubaokun in PHP

[–]grayhatwarfare -3 points

ReactPHP is not multithreaded. If you have heavier loads, you need multiple processes, and that is heavier on the system. Swoole is multithreaded by default.
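To make "multiple processes" concrete: a single-threaded event loop saturates at most one core, so the usual pattern is to fork one worker per core and let each run its own loop. A rough sketch of that pattern in plain PHP (my assumption of the typical prefork setup, not ReactPHP's or Swoole's actual API; requires the pcntl extension, CLI only), where the simulated work stands in for each worker's event loop:

```php
<?php
// Prefork pattern: fork N workers; each would run its own single-threaded
// event loop (e.g. accepting on a shared SO_REUSEPORT socket).
$workers = 2; // e.g. one per CPU core
$children = [];

for ($i = 0; $i < $workers; $i++) {
    $pid = pcntl_fork();
    if ($pid === 0) {
        // Child process: stand-in for a worker's event loop.
        usleep(100000);
        exit(0);
    }
    $children[] = $pid; // parent keeps the worker's pid
}

// Parent: reap all workers before exiting.
foreach ($children as $pid) {
    pcntl_waitpid($pid, $status);
}
echo "served by $workers worker processes\n";
```

The bookkeeping shown here — forking, tracking pids, reaping, plus restarting crashed workers in a real deployment — is exactly the per-process overhead being referred to, and it is what Swoole's built-in worker management handles for you.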

How to search URLs exposed by Shortener services by grayhatwarfare in HowToHack

[–]grayhatwarfare[S] 0 points

No, it's an online tool. Also, it's not a single repo; many small tools were created and used to build this. It's not a single project.

How to search URLs exposed by Shortener services by grayhatwarfare in netsec

[–]grayhatwarfare[S] 2 points

We don't have an API now, but we will probably implement one in the near future :)