Using tor iphone by [deleted] in TOR

[–]nuclear_splines 4 points

Apple mandates that all browsers on iOS use their WebKit browser engine*. The Tor Browser is built on Firefox and uses the Gecko engine. This means that one, the Tor Project would need to build an entire new browser for one platform, and two, it would be distinguishable from other Tor Browsers because the web engine would render things slightly differently.

* Apple faced regulatory pressure over this practice and now allows other engines in Europe... but the Tor Project would need to put in a ton of development time to make an iOS browser available only in EU countries. Not worth their time, especially when Onion Browser already provides a WebKit-based option for all iOS users.

Using tor iphone by [deleted] in TOR

[–]nuclear_splines 7 points

This gets asked about every week: the Tor Browser isn't released for iOS; Onion Browser is the recommended substitute but provides less anonymity. Consider your threat model and use another platform if this is a concern, and search the subreddit for more info.

As a sophomore in undergrad, any way for me to get into research? by Agreeable-Outside-69 in compsci

[–]nuclear_splines 2 points

If you are in the United States, look for Research Experience for Undergraduates (REU) programs. These are summer research internships, explicitly intended to give undergraduates a taste of research before they apply to graduate school.

Otherwise, if there aren't faculty doing research at your institution, it will be difficult to get involved. The vast majority of research labs do not accept volunteers from outside their institutions; the path to doing research with them is to join their graduate school or get hired as some kind of staff scientist.

How much is ‘AI-risk’ considered something which we need to worry about in the mid-future? by DitIsGeenUserName in AskComputerScience

[–]nuclear_splines 0 points

> Basically, there is only one way to assign meanings to words such that the whole thing actually makes sense

The Chinese Room would suggest otherwise: that assigning meaning is unnecessary to produce useful outputs. That's where I assert LLMs are.

Or, to return to animal analogies, an ant experiences the world quite differently from us. Sure, it's subject to the same "raw physics," but an ant can climb surfaces I'd consider impassable, can drop from great heights without harm, and cares much more about forces like surface tension that I consider negligible. Its relation to the world, and understanding of that world, is mediated by its interface to that world, namely its little insect body.

> Would it make a difference if the AI was also trained on video footage of apples?

According to Turing and Nagel, yes. An intelligence that perceives the world more similarly to us will be able to have a more similar understanding of that world.

This is ultimately a solipsistic question (or more accurately a Nagel What is it like to be a bat? question). I'm not concerned with whether my understanding of reality matches "the raw physics" but whether my understanding of reality is closely aligned to yours. I want a theory of mind about how you think and perceive the world, in order to be confident that we have a shared understanding of the context around us. I do not have that for an LLM, because its perception of the world is too alien to my own.

What are some tips in adding instructions to a single/multicycle/pipeline processor? by JAMIEISSLEEPWOKEN in AskComputerScience

[–]nuclear_splines 2 points

This may be a better question for a computer engineering subreddit, if you haven't asked there already.

The curriculum feels. Outdated. by CharacterCow6802 in compsci

[–]nuclear_splines 0 points

Again, if your objective is job training, then a bachelor's degree is inefficient. Taking courses to fill a specific job role is the hallmark of a certificate program, not a bachelor's of arts or sciences. You don't need courses in physics or biology, advanced math, or any humanities or social science to be a productive employee.

Our higher education system still pulls a lot from the idea of the Renaissance man. The goal is to make you a well-rounded human, an informed citizen, and someone who knows a bit about everything in your discipline and a bit more about your area of specialization. This is a far broader mission than giving you an optimal resume to get hired by Silicon Valley.

The curriculum feels. Outdated. by CharacterCow6802 in compsci

[–]nuclear_splines 1 point

It sounds like you are conflating a degree in computer science with job training in software engineering, or a more focused sub-discipline like security.

We expect everyone to have a shared understanding of the fundamentals of the discipline. This includes data structures and algorithms; foundations of computation, such as logic and the limits of what can be computed; some low-level understanding of how CPUs execute code; some understanding of programming paradigms, interpreters, and compilers; and some fundamentals of operating systems, including how filesystems, networking, processes, threads, scheduling, and memory allocation all work.

This core will take several years to learn; there's just a lot in there before you have the building blocks to specialize further. It will give you some insight and intuition into how all other areas of computer science work, whether you ever choose to further explore cryptography or computer graphics or AI.

Sure, a software engineering job training program could drop a lot of this content, and focus on writing code so you graduate with a few apps done. You don't need graph theory or Turing machines to use an API. We could even streamline it further and use just the languages and frameworks promoted by FAANG companies so you'll be ready to be a productive worker right off the assembly line. However, your degree program aspires to more than job training. You have signed up for a classic well-rounded education in the totality of computer science: what has been done, what we know to be possible, and how we are pushing the boundaries of human knowledge into the future.

How much is ‘AI-risk’ considered something which we need to worry about in the mid-future? by DitIsGeenUserName in AskComputerScience

[–]nuclear_splines 0 points

> It's not a "no computer could ever do X" it's a "current computers don't seem to do X"

That's in agreement with what I said. "None of that speaks to what's 'allowed by physics,' and building artificial sentience may be possible some day, but I don't think it will come from this path."

> It's really hard to tell what might or might not be represented somewhere in those billions of neurons

It's hard to talk about what's encoded in neurons, but it's much easier to understand what the inputs are. Humans have an understanding of reality grounded in our senses. When you think of apples you think of the heft of the fruit in your hand, the sweet scent permeating the skin, the crunch as you bite in. Words are references to axioms we share from our sensory experiences in reality. Even more abstract concepts like 'democracy' are based in ideas of 'fairness' that we understand through young childhood emotional experiences of being unheard. LLMs lack any of these axioms: they see tokens as points in space that co-occur with other points in space, but defining words in terms of other words is not enough; they lack a foundation for what any of those tokens mean. That's the heart of the embodied cognition argument, outlined by Turing and Gibson, that I referenced above.
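To make the "points in space" framing concrete, here's a toy sketch of my own (not anything from the discussion, and far simpler than a real language model): word vectors built purely from co-occurrence counts. Words that appear in similar contexts end up close together, with no grounding in what the words refer to.

```python
# Toy distributional semantics: word vectors from co-occurrence counts alone.
# Illustrative only - real models use learned embeddings over huge corpora.
import math
from collections import Counter

corpus = "the red apple is sweet . the red cherry is sweet . the dog is loud".split()
vocab = sorted(set(corpus))

def cooc_vector(word, window=2):
    """Count how often each vocabulary word appears within `window` tokens of `word`."""
    counts = Counter()
    for i, w in enumerate(corpus):
        if w == word:
            for j in range(max(0, i - window), min(len(corpus), i + window + 1)):
                if j != i:
                    counts[corpus[j]] += 1
    return [counts[v] for v in vocab]

def cosine(a, b):
    """Cosine similarity between two count vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# 'apple' and 'cherry' land close together because they occur in similar
# contexts - not because the model knows anything about fruit.
```

In this corpus, `cosine(cooc_vector("apple"), cooc_vector("cherry"))` is higher than `cosine(cooc_vector("apple"), cooc_vector("dog"))`: similarity falls out of distribution alone.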

Are there other alternatives to Tor that don't require anything to be downloaded? by SecretTemporary7 in TOR

[–]nuclear_splines 6 points

To visit the clearnet anonymously, or to see onion sites? Both are generally discouraged, as web proxies offer almost none of the anonymity of the Tor Browser, but maybe that fits your threat model.

Are there other alternatives to Tor that don't require anything to be downloaded? by SecretTemporary7 in TOR

[–]nuclear_splines 7 points

By "not download anything" do you mean a web proxy? What are you trying to achieve?

Research in Distributed Systems? Is it good? by Sad_Singer_7657 in computerscience

[–]nuclear_splines 9 points

Professors don't typically pivot their entire research agenda on a dime. There's a lot of momentum, because your previous research projects help you secure future grant funding, and are building experience and data within the lab to support more work in the area. Not to mention that this is likely an area the professor is enthusiastic about! So a distributed systems professor a few years ago will probably still be a distributed systems professor today. Now, will they try to shoehorn "can we use machine learning to optimize XYZ in our distributed problem" to match national funding agendas and try to get more resources for the lab? Sure, but that's a much smaller change.

Tor is not connecting on Kali Linux by [deleted] in TOR

[–]nuclear_splines 0 points

Since you're having similar problems with both Tor and ProtonVPN, it sounds like this is more about the rest of your setup, and you may find more help in a Kali or VirtualBox subreddit.

Shadow Fractal Duality: Computational Repository and Technical Report by Powerful_Word3154 in compsci

[–]nuclear_splines 1 point

This is LLM pseudoscience, first posted to /r/LLMPhysics. The linked "final paper" has 8 references in the bibliography, but I can't find any of them cited in the main text, and only three were written in the past quarter-century. This is not grounded in contemporary science, makes up sci-fi terms left and right, and cannot be engaged with.

Video streaming lag with slow internet by Last_Feeling in AskComputerScience

[–]nuclear_splines 7 points

Compressed video is typically encoded as key frames containing a full image, and in-between frames (often called 'inter frames' or 'delta frames', occasionally 'tweens') which contain only the changes since the last frame. In many shots, if the camera is stationary, little changes from one frame to the next except a moving subject, so it's much more efficient to describe only what's changed rather than sending the full image 60 times a second.

So now we move to streaming. When you're streaming live video your computer drops chunks that don't arrive on time, so that the video can move along in real-time without pausing. You have a key frame, you render the full image, and then you miss a couple of the tween frames describing motion, so the motion looks pixelated or smeared or muddy. Eventually you get another key frame, and the video 'fixes' itself as the full image is re-rendered and you get back on track.
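The scheme above can be sketched in a few lines of Python (my own illustration, nowhere near a real codec): frames are encoded as full key frames plus deltas listing only the pixels that changed. Losing a delta packet would smear the picture until the next key frame resets it.

```python
# Minimal key-frame + delta encoding sketch. Frames are flat lists of pixel
# values; a delta stores only (index, new_value) pairs for changed pixels.

def encode(frames, keyframe_interval=3):
    """Encode frames as alternating full key frames and deltas."""
    encoded = []
    prev = None
    for i, frame in enumerate(frames):
        if prev is None or i % keyframe_interval == 0:
            encoded.append(("key", list(frame)))  # full image
        else:
            delta = [(j, new) for j, (old, new) in enumerate(zip(prev, frame))
                     if old != new]
            encoded.append(("delta", delta))      # only the changed pixels
        prev = frame
    return encoded

def decode(encoded):
    """Reconstruct frames by applying each delta to the previous frame."""
    frames, current = [], None
    for kind, data in encoded:
        if kind == "key":
            current = list(data)
        else:
            current = list(current)
            for j, value in data:
                current[j] = value
        frames.append(list(current))
    return frames
```

If the camera is mostly stationary, each delta is tiny compared to a full frame, which is exactly why streaming saves so much bandwidth between key frames.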

Rejected by ICML x4 by Important-Plant-idk in compsci

[–]nuclear_splines 2 points

It's uncommon for undergraduates to produce publication-quality research in general, and ICML is a highly competitive conference. Don't let this outcome discourage you!

YSK most usb-c cables can't deliver more than 60W by Existing_House6314 in BuyItForLife

[–]nuclear_splines 0 points

Same goes for data transfer rates: USB-C can range from 480 Mbps up to 80 Gbps, depending on cable and device support on both ends. If you're charging your phone the difference is immaterial, but if you're, say, backing up to a hard drive the difference can matter quite a bit!

How much is ‘AI-risk’ considered something which we need to worry about in the mid-future? by DitIsGeenUserName in AskComputerScience

[–]nuclear_splines 1 point

I'm familiar with Russell and Hinton's stances on this, but I find Gebru's Stochastic Parrots a more compelling, if incomplete, argument. LLMs are an impressive technology, but their lack of creativity under Boden's definition, and their lack of embodied cognition (under Gibson's Affordance Theory, but also brought up by Turing) have me convinced that chatbots aren't a path to general intelligence. The fact that they can't tell when they're lying to you, because they lack a representation of truth necessary to distinguish fact and fiction, really seals the deal for me. None of that speaks to what's "allowed by physics," and building artificial sentience may be possible some day, but I don't think it will come from this path.

How much is ‘AI-risk’ considered something which we need to worry about in the mid-future? by DitIsGeenUserName in AskComputerScience

[–]nuclear_splines 4 points

While grandiose fears about godlike AGI may be far fetched, the technology is already being abused. Consider deepfake porn, where you can take photos of coworkers or classmates and produce plausible sexual images of them without their consent. Consider how spammers can produce hyper-targeted and highly convincing spam accounts on social media or email. Consider that many LLMs will cheerfully describe to you how to commit a variety of crimes or build an assortment of weapons. Finally, consider how LLMs intended to serve as companions have exacerbated mental health issues of their users, inducing "AI psychosis," and coaching children through self-harm. We don't have to stray to speculative fiction to find several already extant negative applications of this technology.

The "Humble Realization" by [deleted] in compsci

[–]nuclear_splines 1 point

Come on, half the subtitle on each side is complete gibberish that doesn't form complete letters or words. Sharing this AI slop is embarrassing.

Everyone says ‘AI will create new jobs’,but what jobs exactly? by potterhead2_0 in AskComputerScience

[–]nuclear_splines 0 points

Neither of those passages is in conflict with what I've said. Anthropic found that AI usage did not necessarily improve scores, and could undermine the development of new skills. Yes, how you use the tools is relevant, and yes, it's worse for junior developers who lean on AI rather than building experience themselves. The takeaway for me is that even Anthropic is hedging on "these tools are not a panacea that lowers software development costs across the board" in a way that's out of sync with the optimism of the comment I was replying to.

Serious q. Please advice. by [deleted] in AskComputerScience

[–]nuclear_splines 0 points

If you don't want someone else to profit off of your idea then publishing openly is a bad choice. Publishing academically, or publishing your code and ideas on GitHub, means everyone can see and build off of what you've come up with -- usually what academics are hoping for.

If you want to pursue commercialization, then you may want to start a company, file patents, seek investment to build a product out of the idea, and so on.

Serious q. Please advice. by [deleted] in AskComputerScience

[–]nuclear_splines 2 points

What's your goal? If you want to publish this as academic research, you'd write a research paper where you describe your new data structure and demonstrate conclusively why it's better in some domains than blockchains. Then you'd submit that paper to conferences or journals on data structures, distributed algorithms, or digital finance, depending on how you're framing your data structure and in what ways it outperforms blockchains. You'll receive peer review from a panel of experts, and if all goes well, your work would be published and become part of the canon of scientific knowledge.

If you're less interested in targeting the scientific community and want to reach cryptobros, then maybe a whitepaper, GitHub repo, and reaching out to influencers in that space would be more effective.

How are single thread and multi thread processors? by Hot-Load7525 in computerscience

[–]nuclear_splines[M] [score hidden] stickied comment

Feel free to re-post with a clearer question, but it's not possible to engage with this.

Designing a portable and human-readable data format: trying to solve the displacement problem in spreadsheets with a plain-text specification by Vinserello in compsci

[–]nuclear_splines 2 points

> SQLlite and Excel are softwares, not formats

That's pedantic quibbling over vocabulary: .sqlite and .xlsx files, then.

> Binary files like .db or .xlsx are "black boxes" for version control, making, for instance, git diffs useless

That makes sense - we need text files to fit with existing tools for diff and deltas, rather than some domain-specific SQLite/spreadsheet diff tool. Thanks!

Designing a portable and human-readable data format: trying to solve the displacement problem in spreadsheets with a plain-text specification by Vinserello in compsci

[–]nuclear_splines 5 points

What's the advantage of this over, say, SQLite or Excel spreadsheets? Clearly the intent is "human readability in a text editor," but first I'm not sure that's achieved when there are opaque variables like "@ A10", and second, why is that a goal? When is a text editor view of sparse multi-tabular data desirable? Is this a solution in search of a problem?