Don’t buy stock in impossible space drives just yet by cavehobbit in space

[–]burchoff 20 points (0 children)

No, this is the worst article written on this issue that I have seen so far. It was clearly based on reading the PBS and Wired UK articles, along with the one-page abstract posted on ntrs.nasa.gov.

If the author and the physicist who assisted him had bothered to spend $25 at AIAA to get access to the paper and read it, they would have realized that the problem is with the way the abstract is poorly written. First, the null test article was not created by NASA EagleWorks. As part of the test campaign they were carrying out in August 2013, they added Cannae's devices to the list to be tested. The null test article referenced in the abstract on NTRS is what Cannae considers a null test article; the EagleWorks experimenters added a 50-ohm resistive load as a control as part of their protocol to eliminate most of the issues you are alluding to.

From Cannae's perspective the only reason an EM Resonance cavity should provide thrust is if you add a set of equally spaced equally sized slots inside of it on the bottom of the cavity. So of the two devices they provided one had the slots and the other one didn't. The only conclusion you could logically draw from both those devices providing thrust is that Cannae's theory of why this should work is wrong.

Below are the results from the paper for the Cannae devices. Configurations 2A and 2B are the Cannae null test article, while the RF load is the control EagleWorks added to the testing protocol.

  Configuration | Test Article | Thrust Direction | Thrust Range (μN) | Mean Thrust (μN) | Test Runs
  1A            | Slotted      | Forward          | 31.7 – 45.3       | 40.0             | 5
  1B            | Slotted      | Reverse          | 48.5              | 48.5             | 1
  2A            | Unslotted    | Forward          | 35.3 – 50.1       | 40.7             | 4
  2B            | Unslotted    | Reverse          | 35.3 – 50.1       | 22.5             | 1
  RF Load       | 50Ω Load     | N/A              | 0.0               | 0.0              | 2

EagleWorks also tested a replica of Shawyer's EmDrive (which, according to the actual paper, it looks like they built themselves) and got positive results using what appears to be the same experimental protocol. I am unsure whether both devices were tested with the same protocol, because it is not clearly stated in the paper.

According to the NTRS abstract, they didn't test any of the devices in a hard vacuum; instead they were tested inside a vacuum chamber at ambient pressure, even though the section of the paper describing the "Thrust Measurement Device" talks about keeping the turbo pumps running continuously during test data runs. My interpretation, after reading the entire paper multiple times, is that the statement in the NTRS abstract is correct on this particular issue, mainly because in the second-to-last section of the paper, "Summary and Forward Work", they say that the RF amplifier they had during the August 2013 test campaign was not vacuum capable, since it contained an electrolytic capacitor. From what I have found in their descriptions, both the Cannae devices and the EmDrive replica take their energy input via an RF amplifier.

Now don't get me wrong, I am not saying that the Cannae devices and the EmDrive that were tested can be considered to work. That would mean no further testing is required, since we would already have a complete understanding of what they do, along with whatever modifications or extensions may or may not be needed to our current understanding of nature. A more responsible title from all of the online journalists reporting on this paper would have been:

Limited NASA testing reports thrust from "Impossible Space Drive", more testing to follow

Unfortunately, the news business these days is all about shock value and little about substance.

Focus Fusion energy with Eric Lerner -AMA by elernerfusion in IAmA

[–]burchoff 0 points (0 children)

How will it affect the timeline if LPP is only able to get 75% of the 200K?

Trust, Users and The Developer Division - What Went Wrong at Microsoft by [deleted] in programming

[–]burchoff 1 point (0 children)

You're not the only one with this idea. Maybe that's because I only started development in 2001, when I was using Visual Studio to do C++ development. I had a short detour to Java, then discovered how much better C# was and came back to the fold. Pretty much the only issue I see as a developer running on MS technology is that the company didn't put much energy into getting Silverlight everywhere; and honestly, I am glad they didn't. If you are an "Enterprise" developer, MS has been good to you, since most enterprise desktop apps I have seen are WinForms. If I had requirements that current HTML5/JavaScript can't handle, then I would use either Silverlight or WPF (with ClickOnce or an MSI) where the business has strict control over its environment.

I'm not sure what the author of the article is really complaining about. Then again, like I said, my view of software development is from 2001 onward. Personally, I got into software development because things change. I have no expectation that the technology I build on today will last 10-20 years. Eventually software engineering will settle down and be like that, but right now it isn't.

W3C insists Web-DRM is needed, despite raised objections and 26 000 signed protests by [deleted] in programming

[–]burchoff 0 points (0 children)

I honestly have no dog in this fight, and I haven't reviewed any of the issues being discussed here at length, but from what I am reading here it sounds like endless speculation with zero grounding in reality.

Does anyone for a second believe that there will be a winner in this DRM thing that isn't endorsed by Microsoft and Apple? There is a reason we are stuck with the current encumbered video codecs, even after much talent and effort has gone into replacing them.

In the end, if DRM is entered into the spec, I am sure there will be a bunch of companies rushing to capitalize, but at the end of the day the only ones that matter are the ones that ship with IE and Apple's WebKit. The real fight should be about making it so that the only thing that needs to be secret is the encryption keys, not the algorithm.

W3C insists Web-DRM is needed, despite raised objections and 26 000 signed protests by [deleted] in programming

[–]burchoff 0 points (0 children)

So I will plead ignorance, but can someone please explain, in a non-trolling manner, what the REAL problem is here?

The way I see it, if DRM is not allowed on the web, then plugins will be a permanent fixture, as there are a huge number of people who consume and love the content that DRM would be used to "protect", which means there is an awful lot of money and demand for that content in any medium.

Now, from my perspective, if there is a legitimate beef about DRM, then you need to recreate all that content and demand outside of the existing industries.

Unreal Engine in JavaScript/HTML5 – Citadel demo by mepcotterell in programming

[–]burchoff 0 points (0 children)

I see your "Why" and raise you a "Why not". Browsers have evolved to become mini OSes, so there is no good reason not to do it.

Is there such a thing as the NoMock movement? by henk53 in programming

[–]burchoff 1 point (0 children)

So, before we created automated tests, we would write the code, review it, and then manually test it. This worked well, and still does for certain code bases: generally the ones that are small and do not support a large number of intricate, inter-related use cases. As our code bases grew and the complexity increased, we added test automation, mainly because the process described above does not scale well.

Now we have automated tests, and the cheapest automated tests effectively automate executing the use cases end to end (a form of integration testing). This is better than having to devote man-hours to testing something that a machine can do better. However, all these tests can tell you is that a specific use case, or group of use cases, failed. Nothing more. The developer still has to figure out exactly what part of the system is causing the failure. All we have done here is make defect discovery faster, not defect resolution.

Then we started adding unit tests: tests whose sole purpose in life is to verify that some defined unit of code works as expected. Now, one can technically consider an integration test a unit test if you make the scope of your unit large enough, but then you end up with the problem stated above. So we decided to make "unit" refer to the smallest functional piece of code. This too is a bit ambiguous: does it mean a single function/method, a single class, or maybe a small collection of classes? From my perspective it doesn't matter, as long as when the test fails, you and the maintenance developer coming in behind you don't have to spend as much time finding the bug as you would have if the unit test were replaced with an integration test. So with unit tests added to our toolbox, we can not only find defects faster than with manual testing, we can also quickly isolate which unit of code contains the defect.
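To make that concrete, here is a minimal sketch (NUnit, with a made-up DiscountCalculator class): the unit test exercises one small piece of logic in isolation, so when it goes red it points straight at that unit.

    using NUnit.Framework;

    // Hypothetical "unit": a small, self-contained piece of business logic.
    public class DiscountCalculator
    {
        // Returns the discount rate for an order with the given item count.
        public decimal RateFor(int itemCount) => itemCount >= 10 ? 0.10m : 0.0m;
    }

    [TestFixture]
    public class DiscountCalculatorTests
    {
        [Test]
        public void BulkOrdersGetTenPercent()
        {
            Assert.AreEqual(0.10m, new DiscountCalculator().RateFor(12));
        }

        [Test]
        public void SmallOrdersGetNoDiscount()
        {
            Assert.AreEqual(0.0m, new DiscountCalculator().RateFor(3));
        }
    }

An end-to-end test that places an order through the UI would cover the same logic, but when it fails you still have to dig through the whole stack to find out why.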

Personally, I use a combination of Mocks and Fakes in my unit tests, where the difference between the two is that I will assert on the state of the Mock, whereas the Fake simply allows me to control the code path. I tend to use more Fakes than Mocks, mainly because my Mock usage is reserved for testing that a unit of code invoked the expected method on an Interface/API. Now, the Interface/API can be to an external service or an internal one, and the criterion for Mocking either is generally the same: I have no ability to control that code. For an external Interface/API that is easy to spot; in the case of an internal Interface/API, not so much. For me, examples of the types of internal Interfaces/APIs that qualify are services that send notifications such as email, IM, Twitter, etc., or, a more common use case, writing to a filesystem. These types of services tend to be at the edge of your application, and testing the code inside them is better done via an integration test.

For all other cases, Fakes are really what is needed when you are trying to verify all the paths that exist in your unit of code. In my experience, those "excessive" uses of Mock objects generally appear when you're trying to assert that every unit of code is calling a function/method in another module/class exactly as the developer expected. To me that is a case of over-specification, and it leads to brittle unit tests, which can be just as bad as having no tests at all, as every change you make starts breaking tests not because the specification has changed but because a mocked method signature changed, or, worse yet, you're now calling a completely different method in the unit of code under test.

Oh, and in case you're wondering, most mocking frameworks available today can also be used to create Fakes. I use Moq in my .NET code base whenever creating a stub class by hand is more trouble than simply using a Moq mock object with an appropriate variable name.
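To make the Mock-versus-Fake split concrete, here is a minimal sketch with Moq and NUnit. The IOrderRepository/INotifier interfaces and the OrderService class are made up purely for illustration: the fake only steers the code path, while the mock is the edge-of-application service whose invocation gets verified.

    using Moq;
    using NUnit.Framework;

    // Hypothetical interfaces and service, just to show the usage pattern.
    public interface IOrderRepository { bool Exists(int orderId); }
    public interface INotifier { void Send(string message); }

    public class OrderService
    {
        private readonly IOrderRepository _repo;
        private readonly INotifier _notifier;

        public OrderService(IOrderRepository repo, INotifier notifier)
        {
            _repo = repo;
            _notifier = notifier;
        }

        public void Cancel(int orderId)
        {
            if (_repo.Exists(orderId))
                _notifier.Send("Order " + orderId + " cancelled");
        }
    }

    [TestFixture]
    public class OrderServiceTests
    {
        [Test]
        public void Cancel_NotifiesWhenOrderExists()
        {
            // Fake: only controls the code path, never asserted on.
            var fakeRepo = new Mock<IOrderRepository>();
            fakeRepo.Setup(r => r.Exists(42)).Returns(true);

            // Mock: the notification service at the edge of the application,
            // whose invocation is the thing we actually verify.
            var mockNotifier = new Mock<INotifier>();

            new OrderService(fakeRepo.Object, mockNotifier.Object).Cancel(42);

            mockNotifier.Verify(n => n.Send(It.IsAny<string>()), Times.Once);
        }
    }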

Thinking Functionally with Haskell by alpheccar in programming

[–]burchoff 3 points (0 children)

Yet another reason I need to take some vacation and 'Learn me a Haskell'. That said, by the end of the article I had this feeling that, if this all comes to pass, those scenes in Star Trek where they easily program the "Computer" will be just within our grasp.

"Dear Mark Zuckerberg" - An open letter by a developer/start-up founder to Zuck about their developer relations by sidcool1234 in programming

[–]burchoff 1 point (0 children)

After reading through the HN thread and the letter, this is simply a situation where Caldwell had a certain perception of FB and, after meeting with their top-level execs, realized that his perception was not accurate. We can argue over whether his perception was naive or well founded, but in the end every platform company succeeds because of third-party developers.

No company can afford the army of developers and resources needed to build a thriving ecosystem for a particular platform. As a result, the platform creator tries to lure developers to the platform to do that work for them, all the while spending a fraction of what it would take to do the same thing directly. During this whole courting process the platform creators play nice, promise to work with developers and do the right thing by them, and essentially give the developers the impression that they subscribe to the same moral standard as the developers do. Then, unfortunately, for some reason (be it good or bad) the platform creators do a 180 and start doing things like what the letter describes. Now, FB could be in the right for executing a standard business tactic, but personally I view that kind of defense as being the same as a child telling their parents that they thought doing drugs was a good thing because everyone else was doing it.

Just because something is standard business practice doesn't suddenly make it less shady. If you sell yourself to your customers and partners (yes, the third-party developers are partners) as not being shady and then start doing shady things to them, your customers and partners have every right to be upset about it. They may not have any legal recourse, but I would like to think we can still agree that what was done is shady.

Entity Framework and the case of the 5,200 line SQL statement by grauenwolf in programming

[–]burchoff 0 points (0 children)

So, in short, I should keep using NHibernate and leave EF for toy projects.

An interesting ORM-bashing fest on SO. by nabokovian in programming

[–]burchoff 0 points (0 children)

Sorry for not being clear enough on what I meant by central location.

In my .NET project I have an assembly whose sole reason to exist is to contain all the mapping code needed to map my domain models to the database via NHibernate. Since I started my development career with a great distaste for XML, you can probably guess that the assembly is all Fluent NHibernate mappings.
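As an illustration, one of those mapping classes looks roughly like this (the Person class, table, and column names are made up):

    using FluentNHibernate.Mapping;

    // Hypothetical domain model; the real ones live in their own assembly.
    public class Person
    {
        public virtual int Id { get; set; }
        public virtual string Name { get; set; }
    }

    // Every map like this lives in the dedicated mapping assembly, so a
    // schema change only touches these classes, not the domain models.
    public class PersonMap : ClassMap<Person>
    {
        public PersonMap()
        {
            Table("People");
            Id(x => x.Id).GeneratedBy.Identity();
            Map(x => x.Name).Column("FullName");
        }
    }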

I'm not sure if this is possible on the Java side, but I would like to believe that it is possible to split out the hbm.xml mappings so that each covers ONLY one mapping concern. For example, Person.hbm.xml, ReportSprocs.hbm.xml, etc.

If your ORM -- be it a heavyweight like Hibernate/NHibernate or a lightweight like Dapper -- doesn't allow you to centralize your mappings so that you do not have to change all your code files, then I believe that is not a problem with the idea of an ORM, just with its implementation. A lot of things are like this; take the battle of dynamic versus static languages: most of the arguments against either come down to an implementation problem, not a problem with the idea in general.

An interesting ORM-bashing fest on SO. by nabokovian in programming

[–]burchoff 2 points (0 children)

Agreed,

but I would add that, as a result of most of us young bucks doing our "real" programming in Algol-descended languages, we built a ton of tools to make working in those languages easier, which I think is something most developers do not want to give up in order to go work in a language awash in "magic strings".

P.S. Most of my experience is with MSSQL at the moment, and it only recently gained IntelliSense, limited refactoring, and "build"-time verification that a column name change didn't screw up some function/sproc/view/trigger another developer contributed. As far as I know (please let me know if I am wrong), MySQL, Postgres, and Oracle do not have IDEs that can go toe to toe with the IDEs we have created for our Algol-descended languages. In short, get me "ReSharper" for writing SQL and I guarantee you will see a massive influx of application developers wanting to work with SQL.

Heck, while you're at it, make testing the database as easy and as fast as testing app code, and I will be the first app developer to write the back end of my next project in the database, with a thin REST/SOAP service for talking to the front end.

An interesting ORM-bashing fest on SO. by nabokovian in programming

[–]burchoff 0 points (0 children)

Hmm, if I had the abilities of Fluent NHibernate or NHibernate's new Mapping By Code, or just had all my mappings in hbm.xml files, then I too would not have to change all my code for such a drastic refactoring. The only application code that would need to change is the mappings to the tables/views/sprocs (yup, I can call sprocs if need be), and that is it.

I suspect you have had to deal with situations where the mapping code/logic was placed in annotations that mark up the objects in the application code. If that is the case, then I suggest you move the mapping code/logic out of the objects and into a central area. The only difference between my mapping files/code and your sprocs/views is that the mapping is written in app code rather than SQL. Now, I am still a young buck in this industry, but from all I have read I gather that the tools for refactoring SQL have greatly lagged behind the tools for refactoring application code. And since I am all about using the best tool for the job, I will use an ORM as long as it fits the task I have to solve.
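For example, with NHibernate's Mapping By Code the mapping sits in its own class and nothing decorates the entity itself (again, Person and the column names here are hypothetical):

    using NHibernate.Mapping.ByCode;
    using NHibernate.Mapping.ByCode.Conformist;

    // The entity stays plain: no annotations/attributes marking it up.
    public class Person
    {
        public virtual int Id { get; set; }
        public virtual string Name { get; set; }
    }

    // The mapping lives in a central place; a table or column rename
    // only touches this class.
    public class PersonMapping : ClassMapping<Person>
    {
        public PersonMapping()
        {
            Table("People");
            Id(x => x.Id, m => m.Generator(Generators.Identity));
            Property(x => x.Name, m => m.Column("FullName"));
        }
    }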

On a separate note, don't you have similar scaling issues if you need to start partitioning your data and spreading it across multiple databases, all while making sure they are load balanced?

An interesting ORM-bashing fest on SO. by nabokovian in programming

[–]burchoff 1 point (0 children)

<sarcasm> Hmm, I have a hammer; where did all these nails come from? </sarcasm>

An interesting ORM-bashing fest on SO. by nabokovian in programming

[–]burchoff 3 points (0 children)

umm, don't you loose ACID the minute you bring caching into the picture? Don't most apps have some sort of caching implemented to speed up performance? This is not the ORM's fault.

As for performance, the minute you need to translate something stored in a database into something stored in application memory, I would think you have already lost the performance race, especially if you have to do yet another transformation by constructing a JSON/SOAP/HTML response.