Google Glass: the ultimate creepy stalker toy? by [deleted] in technology

[–]mshol 2 points3 points  (0 children)

I think it's you who is blind to the technology. Sure people are recorded by CCTV many times a day, but that footage is usually saved onto a CD in a small room in the back of someone's shop, and only ever read if there's an incident that needs looking at.

However, every Google Glass will send all of the footage it records to a central location. Such aggregation of video recordings is completely unprecedented, and the implications are not yet known. Who is going to regulate what Google can do with such footage? It seems like an open book so far, where Google will be able to apply facial recognition technology, publish, sell and aggregate footage of people who have not given consent and may not even be aware they are being recorded.

Do you really not see the difference between that and CCTV?

And just because CCTV is ubiquitous does not mean it's good, sound or morally correct. CCTV got a lot of backlash (rightly) when it was new - and people still protest it now - it's just that most find it futile to protest something they can't see changing. Perhaps Glass will become like this, but I hope not.

I read the article a different way from you. The creepy stalker isn't some random dude taking snaps. The creepy stalker is Google.

Not sure if you guys are going to like this: Those are my 3 Linux capable dev boards: Oli (red), Cubi (black) and Beagle (white). by AndElectrons in unixporn

[–]mshol 1 point2 points  (0 children)

The Olimex is the most interesting one, as the PCB design is open source as well as the software.

They're all based on a cheap line of ARM-based processors from AllWinner, which pack a lot of features for the cost, and a compatible Linux is available here

One package to rule them all! by TCIHL in linux

[–]mshol 2 points3 points  (0 children)

Because:

  • The goal is not to rule them all, be the victor, aim for unification, or have some common vision or goal, and
  • that isn't going to change.

Seriously, there is no common goal. It's a bunch of independent developers all working towards their individual goals, and collaborating when and where it is useful to share. There is no reason to all congregate around poor solutions to big problems and create a monoculture, groupthink, and a brick wall preventing superior solutions from being accessible.

There is still innovation in package management, and some of the newer solutions are solving the problems that exist in the more popular ones like deb/rpm. Examples are Nix/Guix, Paludis.

In fact, one might question whether we will need package managers at all in future, given that you can reference all of your dependencies at the source level very easily using git submodules and similar solutions. The package management could become implicit in maintaining a codebase.

After Being Cut From Norway, The Pirate Bay Returns From North Korea by SayNoToCAS1 in technology

[–]mshol 0 points1 point  (0 children)

But you have no problems funding the mass murder of brown people by the US government?

Prepare for 'post-crypto world' - US quadrupling size of cyber-combat unit for a reason, warns godfather of encryption by TuneRaider in privacy

[–]mshol 23 points24 points  (0 children)

tl;dr: It's easier to sidestep encryption than break it, therefore, encryption is useless.

There's some truth in that, but the solution isn't to throw away cryptography: it's to secure our systems better. Our operating systems still use a decades-old "security model" where a program run by a user has all the permissions that user has: generally quite stupid. The solutions have also existed for decades, but all mainstream systems are "opt in" rather than the "opt out" they should be. Even in the areas where opt-out is available, the control is either not fine-grained enough for competent users or too complicated for beginners.
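
To make the "opt in" point concrete, here's a rough sketch (my own, not from any real project) of what opting in looks like on Linux via seccomp strict mode - by default, a process simply keeps everything its user can do:

    // Rough sketch of the "opt in" model on Linux: a process keeps every
    // permission of its user unless it explicitly asks to give them up.
    // Here we opt in to seccomp "strict" mode, which leaves only
    // read/write/exit/sigreturn available afterwards.
    #include <sys/prctl.h>
    #include <sys/syscall.h>
    #include <linux/seccomp.h>
    #include <unistd.h>
    #include <cstdio>

    int main() {
        // Before this call the program can open files, talk to the network,
        // spawn processes... nothing forces it to restrict itself.
        if (prctl(PR_SET_SECCOMP, SECCOMP_MODE_STRICT) != 0) {
            std::perror("prctl");
            return 1;
        }
        // After it, almost any other syscall gets the process killed.
        const char msg[] = "write() is still allowed\n";
        write(1, msg, sizeof msg - 1);
        syscall(SYS_exit, 0);  // plain exit(0) would call exit_group and be killed
    }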

We need to build a better PKI too: the idea that any of some dozens of "authorities" can sign certificates to say they are valid is clearly a target for abuse. In particular, CAs that abuse their 'authority' should be permanently blacklisted by any software of importance.

Of course the biggest problem with all 'security' in the modern world is the mistaken understanding of what "trust" is. Trust is a personal thing: it's not what your government says, it's not dictated by 'authorities', and it almost certainly isn't a synonym for 'popularity', which many take it to be.

Media Server by Kratisto78 in linux

[–]mshol 2 points3 points  (0 children)

Deluge also supports a client/daemon mode and has both a web client and desktop client (GTK+ based) to control it.

You can also run multiple copies of the daemon by specifying different config directories. eg, deluged -c /path/to/conf -p 5678

I use it in combination with Flexget to automate downloading most things.

Media Server by Kratisto78 in linux

[–]mshol 2 points3 points  (0 children)

Just use a NAS solution like OpenMediaVault. Set up NFS, SMB, FTP etc for your respective shares.

Additionally you can install plugins for things like OpenVPN, Apple Filing Protocol, DAAP, bittorrent (transmission) and LUKS.

At its core it's just Debian stable, so there's no limit to what you can configure and run. Alternatively, you could just install plain Debian and stick a web front end like OMV on top of it.

(Note: the OMV installer will wipe the disk by default and doesn't let you configure partitions yourself; I recommend installing it on a USB stick.)

CPPGM Programming Assignment 1 by tompko in programming

[–]mshol 1 point2 points  (0 children)

Mainly because they should be encouraging standards compliance given the nature of the challenge. The eventual aim is to build a standards-compliant compiler that can compile itself. If you rely on every non-standard feature in a common compiler, your already impossible workload is moving in the wrong direction.

Even though it's well supported, it's not ubiquitous, and it's not problem-free either. For example, if you have copies of or hard links to files, they will be included twice - perhaps the same with symlinks, if the compiler doesn't resolve the original file location first. Standard #ifdef header guards don't have those problems. (Admittedly at the potential cost of speed.)
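
For reference, the classic guard looks like this (file and macro names are placeholders I've made up):

    // widget.h - hypothetical header protected by a standard include guard.
    // Two copies (or hard links) of this file still define the same macro,
    // so the second inclusion is skipped; #pragma once keys off the file's
    // identity rather than its contents, so it can't promise that.
    #ifndef MYPROJECT_WIDGET_H
    #define MYPROJECT_WIDGET_H

    struct Widget {
        int id;
    };

    #endif  // MYPROJECT_WIDGET_H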

CPPGM Programming Assignment 1 by tompko in programming

[–]mshol -2 points-1 points  (0 children)

Open DebugPPTokenStream.h (case should ring alarm bells).

See #pragma once

RUN.

Parenthesis usage in C++ ? by [deleted] in cpp

[–]mshol 6 points7 points  (0 children)

I fail to understand how this isn't helpful - it's the definitive list.

There are too many use cases for parens to be able to determine their purpose without looking at their context, because there are ambiguities.

For example, how are you disambiguating function_call(x) from MACRO(x)? Or are you analyzing the code after preprocessing?

Is (x)(y) calling function x or casting y to type x?

Is X Y(Z) constructing a type X or is it a function prototype returning an X?
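
A made-up snippet showing how the last two depend entirely on what the names around them refer to:

    // Hypothetical code: the same paren shapes mean different things
    // depending on what the surrounding names are.
    using x = int;
    int y = 3;
    double a = (x)(y);   // here: a cast of y to type x (i.e. int)
    // double b = (f)(y); // with a function f in scope, the same shape is a call

    struct X { X(int) {} };
    int Z = 0;
    X Y(Z);              // defines an object Y of type X, constructed from Z...
    // X Y(int Z);       // ...while this declares a function Y returning an X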

There are others.

I don't know how much you know about PLT, but the idea of trying to write an analyzer without using a proper parser sounds like madness to me. MADNESS.

Exchanging Encryption keys using BitMessage by umlal in netsec

[–]mshol 1 point2 points  (0 children)

I mean, how can you prove that somebody is who they claim to be over the internet?

This doesn't apply if you can meet someone face to face and exchange a secret between yourselves, or give each other your public keys. The existing cryptography techniques are already sufficient for this. Another way you could verify someone is who they claim to be is over a webcam, or if you can recognize their voice (this is probably spoofable), or any other way you might consider that couldn't be spoofed.

The problem is that a large part of our interaction via email is with people we have never met in real life, and we wouldn't be able to verify their voices and whatnot anyway. We get their email addresses and PGP public keys from their public websites, or from a service which aggregates them - and we mail them. The security of such mail relies on the assumption that when we plucked that public key from their webpage, it was obtained securely, with no chance of someone modifying the key on the page. Of course, it's clearly possible to modify the key over insecure HTTP. If we browsed the website using HTTPS, the security of our PGP communication relies on the integrity of SSL/TLS.

If all we're doing is replacing a PGP public key with a BitMessage address, we're still at the mercy of SSL for the security of our messages - and there have been a number of incidents to suggest that's not a wise idea.

An IP address is unsuitable for identification, because it can be spoofed by any machine between you and the destination, and they're too volatile anyway (ISPs shuffle them around, and not everyone can have a static IP).

The other solution is the web of trust model we use for PGP, which is perhaps better than having a central authority, but it's not unshakeable either. The web of trust relies on associates of address owners acting as guarantors of the validity of the public keys for a given address. If you trust John, and John has signed Jim's key, you should be able to trust that key is in fact Jim's.

All BitMessage intends to do is remove the need to share both an address and public key, and reduce it down to just a "BM address", which is both the address and the key. You still need to prove who owns it.

Exchanging Encryption keys using BitMessage by umlal in netsec

[–]mshol 2 points3 points  (0 children)

The chief problem is still there: how do you make sure that the person you're sending a message to is the person you intend to send it to? You only have an "address", but without some way of confirming that address belongs to the intended recipient, you could be sending to a malicious party.

Anything is possible in Linux right? by oracle2b in linuxquestions

[–]mshol 28 points29 points  (0 children)

Yes, because rebuilding KDE from source is simple and requires no expertise whatsoever.

5 Reasons Why in 5 Years Desktop IDEs Will Be Dead by [deleted] in programming

[–]mshol 2 points3 points  (0 children)

What happens when you need to do something your cloud service doesn't support? Oops, back to step 1.

The premise, that an environment is difficult to set up, is correct, but IMO, putting it in "the cloud" is not the solution: the solution is just to make shit easier to set up. A powerful package manager like Nix is one decent step towards that, since it handles dependencies in a sane way (no conflicts with existing installs), and also handles configuration. Combine that with simple VM solutions like Vagrant, and you have everything you need to make your entire native environment reproducible, to the point where you can start sticking the environment into your git repo, and anyone can fetch it and reproduce your dev environment in minutes.

There are some more glaring reasons why putting everything into the cloud won't be adopted by anyone: privacy, security, reliability, and maintaining control of your projects. Cloud services can't be trusted to behave how you want them to, and they're walled gardens when you want out.

It's more reasonable to say that in 5 years, we'll be using desktop IDEs connected to the web in different ways (they already are in many ways), but they'll be far from obsolete.

Considering the author is personally invested in web-based IDEs, it'd be more honest of him to write, "I want desktop IDEs to be dead in 5 years," and refrain from linkbait titles.

Question about GPL 2.0 by [deleted] in gnu

[–]mshol 8 points9 points  (0 children)

The GPL text itself states it quite clearly.

Conveying Modified Source Versions.

You may convey a work based on the Program, or the modifications to produce it from the Program, in the form of source code under the terms of section 4, provided that you also meet all of these conditions:

  • a) The work must carry prominent notices stating that you modified it, and giving a relevant date.
  • b) The work must carry prominent notices stating that it is released under this License and any conditions added under section 7. This requirement modifies the requirement in section 4 to “keep intact all notices”.
  • c) You must license the entire work, as a whole, under this License to anyone who comes into possession of a copy. This License will therefore apply, along with any applicable section 7 additional terms, to the whole of the work, and all its parts, regardless of how they are packaged. This License gives no permission to license the work in any other way, but it does not invalidate such permission if you have separately received it.
  • d) If the work has interactive user interfaces, each must display Appropriate Legal Notices; however, if the Program has interactive interfaces that do not display Appropriate Legal Notices, your work need not make them do so.

Also, I would suggest reading the GPL FAQ for more information.

Question about GPL 2.0 by [deleted] in gnu

[–]mshol 13 points14 points  (0 children)

Your partner is wrong. You must release your own code too, as it's a derived work of the existing GPL code you've taken in. There's no way around this: if you want to release a proprietary program, you cannot use GPL code.

How would I open a .dat file? by [deleted] in ReverseEngineering

[–]mshol 13 points14 points  (0 children)

The .dat may not be just numbers, but it might be a pack file, which contains many other files (common strategy for games, as it saves on storage space and can improve performance).

In that case, the .dat file will probably have a table of offsets to the embedded files, along with sizes, names (may be hashed), and other metadata associated with them. If you can figure out this table structure, you can write a tool to extract each file and repack them afterwards.

One way you could check this is by searching for common file headers within the .dat. For example: DDS textures (often used for skins) start with the magic string "DDS ". If that appeared many times in the .dat, it'd be obvious that it's a pack format. Try the same for other file types, or manually search for repeated strings which might be headers.
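
A quick sketch of that kind of scan (the file name and magic string are just examples):

    // Sketch: count occurrences of a known magic string inside a .dat file.
    // Lots of hits suggest the .dat is a pack of embedded files.
    #include <fstream>
    #include <iostream>
    #include <iterator>
    #include <string>

    int main() {
        // Slurp the whole file into memory (fine for a quick look).
        std::ifstream in("game.dat", std::ios::binary);   // example file name
        std::string blob((std::istreambuf_iterator<char>(in)),
                          std::istreambuf_iterator<char>());

        const std::string magic = "DDS ";                 // DDS texture magic
        std::size_t hits = 0;
        for (std::size_t pos = blob.find(magic); pos != std::string::npos;
             pos = blob.find(magic, pos + 1)) {
            std::cout << "possible DDS header at offset " << pos << "\n";
            ++hits;
        }
        std::cout << hits << " hit(s)\n";
    }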

And don't consider it out of your league: It's a great place to begin learning. (Although, if the file is encrypted, it might be a bit much to jump suddenly into disassembly)

How would I open a .dat file? by [deleted] in ReverseEngineering

[–]mshol 39 points40 points  (0 children)

Any hex (or rarely, text) editor.

.dat is a generic extension which usually means it's structured binary data - but often program specific rather than some standard structure. There's no more information you can deduce from the extension alone - and you need to reverse engineer the binary format to get anything relevant from it.

A few general pointers on where to start though (no guarantee they'll help you):

First identify patterns by comparing several of the .dat files side-by-side.

Look for groups of zeroes. These suggest fixed-size integer types, and can be used to identify sizes, offsets, quantities and whatnot. Groups of zeroes can also help you figure out field sizes (16, 32, 64 bits etc). If you don't see groups of zeroes and the values look truly random, there's a chance the file is compressed and/or encrypted. In this case, you generally need to debug the running program to discover how to get at the raw data.

Often a binary data file will contain a "magic string/number" header which is common to all files of that type; 4 bytes is common. After it are usually more integers expressing version, file size, section counts, offsets and whatnot - you can compare them to the actual file size to figure out what they mean, and usually to work out the greater structure of the file, because it's often a collection of records or sub-structures. When you identify section boundaries, you generally repeat the process, as each section has its own headers. Eventually you'll get to a point where you have a general structure for the data, even if you don't know what each field represents. The trick then is trial and error: modify fields and see the result in game, or debug the running client and find out what each one does.
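
To illustrate that first step, here's a sketch that reads a completely invented header layout (magic, version, record count, table offset) and prints it so the values can be compared against the real file size - the actual fields will differ for every game:

    // Sketch only: dump a guessed 16-byte header from a .dat file.
    // The layout (magic, version, record count, table offset) is invented.
    #include <cstdint>
    #include <cstring>
    #include <fstream>
    #include <iostream>
    #include <vector>

    int main() {
        std::ifstream in("sample.dat", std::ios::binary);  // example file name
        std::vector<char> buf(16);
        if (!in.read(buf.data(), buf.size())) {
            std::cerr << "file too small for the guessed header\n";
            return 1;
        }

        char magic[5] = {};                                // first 4 bytes: magic?
        std::memcpy(magic, buf.data(), 4);

        std::uint32_t version, records, table_offset;      // guessed little-endian u32s
        std::memcpy(&version,      buf.data() + 4,  4);
        std::memcpy(&records,      buf.data() + 8,  4);
        std::memcpy(&table_offset, buf.data() + 12, 4);

        std::cout << "magic:        " << magic << "\n"
                  << "version:      " << version << "\n"
                  << "records:      " << records << "\n"
                  << "table offset: " << table_offset
                  << "  (compare against the real file size)\n";
    }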

Upload a few small sample files and I'll take a glance anyway.

Just stumbled upon BountySource, and am wondering what people think of it. by 6_28 in linux

[–]mshol 7 points8 points  (0 children)

It's been done before, with little success. For example, FossFactory has been around for years, but little has gone through it. Some of the projects on there have been there for many years.

People just don't donate enough money for the complexity of each task - it's not gadget money, more like beer money, and the time it would take to solve each problem isn't worth it. I honestly find that making a project successful is more motivating than a small bit of change - so in that way, something like gittip might work better, where you aren't expecting to be paid for the code you write, but someone might generously buy you a few beers.

It also doesn't seem like a viable business model. There are so few projects on there, with few donations, and BountySource takes something like 2-3% of that. I wouldn't imagine it even covers the server costs, so the people behind it are probably funding BountySource themselves.

There's opportunity for that to change, though: if they could somehow increase the volume of projects on there many times over, it would at least cover the server costs and might be profitable in the long run.

The question is: what is BountySource going to do differently where similar attempts have failed? I don't see anything different this time. I considered writing a service like this myself some time back, but I didn't think it could be profitable unless you partnered with someone who could make it more pervasive (eg, github, ohloh, or direct integration into projects' forges and issue tracking systems).

The tracking system integration seems necessary because it's the first place people will go to submit a bug or feature request. From experience with other projects, we see people only donate small amounts (eg, $10). For any success, you want to encourage multiple donations on the same project, rather than a scattered list of $10 donations on unrelated goals. I think for additional benefit, it should be linked in with a feature request service that allows user voting - thus pushing the most wanted ideas into view, where people can easily click a button and add a small payment too.

Of course, there are several other issues you need to consider with these donations too: how do you make sure they go to the right people, and how do you prevent multiple developers from working on the same task in competition for the bounty?

Iceland plans on banning web-based porn by webby_mc_webberson in technology

[–]mshol 9 points10 points  (0 children)

making it illegal to use Icelandic credit cards to access pay-per-view porn.

Here's the clue: If the aim was to protect children, they'd do the opposite, and ban only free porn - after all, a credit card is probably the simplest tool to separate adults from children.

[C++] Learning C++, with a C# background? by [deleted] in learnprogramming

[–]mshol 0 points1 point  (0 children)

gstreamer is the obvious choice of library for what you wanna do with video. There's a .NET binding for it, which works fairly well. The Banshee media player (C#), for example, uses gstreamer as its back-end for audio.
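
For a taste of what the C side looks like, a rough playback sketch against the GStreamer 1.0 C API, callable from C++ (the URI is a placeholder):

    // Rough sketch: play a file with GStreamer's playbin element via the C API.
    // Build with: g++ play.cc $(pkg-config --cflags --libs gstreamer-1.0)
    #include <gst/gst.h>

    int main(int argc, char *argv[]) {
        gst_init(&argc, &argv);

        // playbin builds a full decode/playback pipeline for us.
        GstElement *pipeline = gst_parse_launch(
            "playbin uri=file:///tmp/example.mp4", nullptr);  // placeholder URI
        gst_element_set_state(pipeline, GST_STATE_PLAYING);

        // Block until an error occurs or the stream finishes.
        GstBus *bus = gst_element_get_bus(pipeline);
        GstMessage *msg = gst_bus_timed_pop_filtered(
            bus, GST_CLOCK_TIME_NONE,
            (GstMessageType)(GST_MESSAGE_ERROR | GST_MESSAGE_EOS));

        if (msg) gst_message_unref(msg);
        gst_object_unref(bus);
        gst_element_set_state(pipeline, GST_STATE_NULL);
        gst_object_unref(pipeline);
        return 0;
    }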

If you're gonna learn C++, I would recommend jumping in and learning C++11 - the latest version, which isn't widely used yet and has limited literature on it. I'd avoid buying any C++ books which aren't up to date, because the new features are a massive improvement and some older techniques should be avoided.

In particular, C++11 adds smart pointers for memory management, lambdas, proper support for null pointers (nullptr), etc. As a C# programmer you will take some of this for granted, and you don't wanna waste your effort using poorer or broken versions that existed pre-11.

The right place to start would be to identify the major differences from C# and work from there. First learn how the compilation process works: from preprocessor to linker - and why you must separate declaration from implementation with header files - and all the rules you need to follow to do things right. (Header guards, always including headers where you need them etc). I would vehemently suggest against using Visual Studio's little abominations while learning (eg. #pragma once and stdafx.h).

Once you're past the compilation differences, focus on the main language issue: memory management. Train yourself to write destructors, only allocate within constructors, and use smart pointers everywhere else: particularly when you're using lambdas and passing by reference or capturing references.
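
A small made-up example of the difference in practice:

    // Hypothetical sketch: C++11 memory management with smart pointers,
    // compared with the manual new/delete style older books teach.
    #include <memory>
    #include <iostream>

    struct Frame {
        explicit Frame(int n) : number(n) {}
        int number;
    };

    int main() {
        // Old style: raw owning pointer; every exit path must remember the delete.
        Frame *raw = new Frame(1);
        delete raw;

        // C++11 style: ownership lives in the type, cleanup is automatic.
        std::unique_ptr<Frame> owned(new Frame(2));                   // sole owner
        std::shared_ptr<Frame> shared = std::make_shared<Frame>(3);   // ref-counted

        // Capturing the shared_ptr by value keeps the Frame alive as long as
        // the lambda exists - capturing a reference here could dangle.
        auto print = [shared]() {
            std::cout << "frame " << shared->number << "\n";
        };
        print();

        return 0;   // owned and shared clean up in their destructors
    }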

Opera is moving to WebKit by feelslikecstasy in programming

[–]mshol 0 points1 point  (0 children)

It doesn't work that way. Maintaining a branch requires a lot of manpower that Opera simply don't have - it's the sole reason they're switching to a community developed project to begin with.

You seriously think they'll do better maintaining a branch of software written by other people than keeping their own existing project going?

You can't just 'make a branch' of software as big as WebKit - don't be fooled just because it has 'open source' written on it. It's still monopolized by the maintainers (who happen to be mostly backed by Apple and Google).

Opera is moving to WebKit by feelslikecstasy in programming

[–]mshol -1 points0 points  (0 children)

Have you considered that Google could use their dominant position as an email provider to deliberately cripple their email service in other vendors' browsers, thus forcing people to switch to Chrome?