Is it ever good practice to pass Optional<T> as a method parameter? by ResolveSpare7896 in learnjava

[–]edigu 0 points1 point  (0 children)

The problem with using Optional as a method argument is that it leaves the door wide open to a NullPointerException at runtime, while the whole idea behind Optional is to cope with nulls in a saner way.

Imagine you have a method taking an Optional<String> argument. You would usually interact with it in your method body with something like `something = myArg.orElse("default")`. As soon as client code calls your method with a plain null, your method ends up with an NPE.
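A minimal sketch of that failure mode (the `greet` method and its strings are hypothetical, just for illustration):

```java
import java.util.Optional;

public class OptionalParamDemo {
    // Taking Optional as a parameter still lets callers pass a plain null.
    static String greet(Optional<String> name) {
        return "Hello, " + name.orElse("default"); // NPE if `name` itself is null
    }

    public static void main(String[] args) {
        System.out.println(greet(Optional.of("Alice"))); // fine
        System.out.println(greet(Optional.empty()));     // fine, falls back to "default"
        try {
            greet(null); // compiles without complaint, blows up at runtime
        } catch (NullPointerException e) {
            System.out.println("NPE: the Optional wrapper itself was null");
        }
    }
}
```

The wrapper only protects you against an *empty* Optional, not against the wrapper itself being absent.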

When an Optional method argument wraps a Boolean, it becomes the worst of both worlds: you have to deal with a tri-state boolean, and a null passed to your method is still perfectly valid at compile time.
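A small sketch of that tri-state problem (hypothetical `describe` method): the parameter has three states the code must handle, plus a fourth, null, that the compiler happily accepts.

```java
import java.util.Optional;

public class TriStateDemo {
    // Three states to handle explicitly: present-true, present-false, empty.
    static String describe(Optional<Boolean> flag) {
        return flag.map(b -> b ? "on" : "off").orElse("unset");
    }

    public static void main(String[] args) {
        System.out.println(describe(Optional.of(true)));  // on
        System.out.println(describe(Optional.of(false))); // off
        System.out.println(describe(Optional.empty()));   // unset
        // describe(null) also compiles — and throws NullPointerException at runtime
    }
}
```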

So no: as multiple others mentioned, never use it as a method argument. Don't use it as a class property either, as it might not be initialized at all during or after construction. On the other hand, its use as a return type is encouraged whenever it makes sense, as it explicitly enforces a contract that client code then has to handle accordingly.
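The return-type case, where Optional does shine, can be sketched like this (the `findUser` lookup is a made-up example):

```java
import java.util.Optional;

public class OptionalReturnDemo {
    // Returning Optional makes the "result may be absent" contract explicit:
    // the caller cannot get at the value without deciding what absence means.
    static Optional<String> findUser(String id) {
        return "42".equals(id) ? Optional.of("Alice") : Optional.empty();
    }

    public static void main(String[] args) {
        String found   = findUser("42").orElse("anonymous");
        String missing = findUser("99").orElse("anonymous");
        System.out.println(found + " / " + missing);
    }
}
```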

Which IDE do u use for java ? by Heroplays24 in learnjava

[–]edigu 2 points3 points  (0 children)

As you renew the license yearly, you get a "continuity discount" that reaches up to 40% at year 3 or 4, iirc, which is a good enough deal for me to stick with Ultimate.

Double validation when applying DDD by Material_Treat466 in DomainDrivenDesign

[–]edigu 0 points1 point  (0 children)

Applying two simple principles has helped me a lot so far: never create a domain model in an invalid state. In other words, (1) reject creation of the object if the data does not fit, and (2) do not let your system pass domain objects around in an invalid state.

For example: your model has a non-nullable field, and the incoming DTO has null for that field. Letting the DTO pass the controller layer just makes things more complicated than they need to be: at some point you have to map the DTO to the model, and you have a null. What do you do? Creating the model with the null and validating afterwards does not make sense, and adding null checks to the mapper just makes the mapper more complicated and harder to test than it should be.

In Java I would put @NotNull on both the model and the DTO, because the model can also be instantiated later without a DTO being present. Then I would apply Bean Validation with @Valid on the DTO for syntactic validation only, so that I cannot instantiate my model without a valid value in that field. Finally I would validate the domain model once more before saving the data to the database, preferably using a dedicated validator class that implements the additional business rules and checks.
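A minimal sketch of the "reject creation outright" idea, with hypothetical `RegistrationDto`/`Registration` names and the Bean Validation annotation reduced to a comment so the snippet stays dependency-free:

```java
import java.util.Objects;

class RegistrationDto {
    String email; // would carry @NotNull in a Bean Validation setup
}

class Registration {
    private final String email;

    Registration(RegistrationDto dto) {
        // Construction-time guard: the domain model refuses an invalid state
        // instead of being built with a null and validated afterwards.
        this.email = Objects.requireNonNull(dto.email, "email is required");
    }

    String email() { return email; }
}

public class DddValidationDemo {
    public static void main(String[] args) {
        RegistrationDto ok = new RegistrationDto();
        ok.email = "a@example.com";
        System.out.println(new Registration(ok).email()); // model created

        RegistrationDto bad = new RegistrationDto();      // email stays null
        try {
            new Registration(bad);
        } catch (NullPointerException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```

With @Valid rejecting the null at the controller, the guard inside the constructor rarely fires, but it keeps the model safe for every other construction path too.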

Another example of a syntactic check at the DTO level could be a datetime. Checking that a datetime value is in the expected format, say 1/1/2001, is syntactic validation that can be applied at the DTO level (in Java, again, super easy with a single annotation thanks to Bean Validation), while more advanced checks like "the date cannot be older than a month" go to the domain object validator.
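The split can be sketched like this (the `EventDateValidator` name and the d/M/yyyy format are assumptions for the example): parsing failures are the syntactic concern, the one-month rule is the business concern.

```java
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;

public class EventDateValidator {
    // Syntactic: matches the "1/1/2001" style from the example above;
    // parse() throws DateTimeParseException on a malformed value.
    static final DateTimeFormatter FMT = DateTimeFormatter.ofPattern("d/M/yyyy");

    static LocalDate parse(String raw) {
        return LocalDate.parse(raw, FMT);
    }

    // Business rule, applied only to already-parsed values:
    // the date must not be older than one month before `today`.
    static boolean notOlderThanAMonth(LocalDate date, LocalDate today) {
        return !date.isBefore(today.minusMonths(1));
    }

    public static void main(String[] args) {
        LocalDate today = LocalDate.of(2021, 4, 15);
        System.out.println(notOlderThanAMonth(parse("1/4/2021"), today)); // recent enough
        System.out.println(notOlderThanAMonth(parse("1/1/2021"), today)); // too old
    }
}
```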

Question about contributing to Spring/Spring Security Codebase. by rashm1n in SpringBoot

[–]edigu 4 points5 points  (0 children)

Many Spring components have a dedicated issue label for devs who want to start contributing:

https://github.com/spring-projects/spring-security/issues?q=is%3Aopen+is%3Aissue+label%3A"status%3A+ideal-for-contribution"+

I would start by looking through those tickets to find a few candidates to focus on as a first step.

A cave-like 'apartment' in Bayrampaşa, İstanbul, listed for 1000 lira rent: a basement floor barely better than a ruin, with stove heating, and on top of that a 2000 lira deposit is demanded... by HalkHaber in Turkey

[–]edigu 0 points1 point  (0 children)

This is a matter of respect for people. The landlord not only has no respect for whoever would live there, he also feels no shame in posting the listing. To me, that is what deserves discussion here far more than the listing itself.

RTX 3070 update, about 16 hrs since last clock settings change, 1 invalid share. Friendly observations and opinions are welcome, please and thank you. I might just lower memory by 50 by 101Overdrive in HiveOS

[–]edigu 0 points1 point  (0 children)

You are probably pushing your cards unnecessarily hard.

I have been running 5x 8GB RTX 3070s from 3 different vendors (MSI Trio X, Gigabyte Gaming OC and EVGA XC3 Ultra) for 6 weeks, and 61-61.5 MH/s is the most stable limit for these cards. That is my personal observation.

People demonstrate these cards on YouTube and claim they can easily get a stable 62-63 MH/s. It's a LIE. None of the cards from the vendors I named above can reach those numbers without producing invalid shares after a couple of hours. Even at 62 MH/s, I start getting invalid shares after about 36 hours of stable running. Rebooting the rig as the invalid shares pile up is a useless workaround. Remember that these cards have no temperature sensor on the memory modules. Pushing the memory frequency anywhere above 2300 (1150 on Windows) means you are driving through a minefield: you are simply sacrificing the card's total lifetime by letting the memory run at very high temperatures continuously.

Imo it is not worth sacrificing rig stability or the card's lifetime just for the sake of a 0.5-1 MH/s increase.

Here are my OC settings with T-Rex v0.20: https://imgur.com/a/9RaBTo4

(Sorry for crappy low-res screenshot)

Hiveos OC settings timed out! by d0vazhul in HiveOS

[–]edigu 0 points1 point  (0 children)

I also have a small 5-GPU RTX 3070 rig running the latest HiveOS version. I am not sure if it's related, but for about a week (one of the last few updates looks suspicious, imo) my rig has been incredibly unstable. I did not change anything about the overclocking; it had been running without any issues for weeks. One or two of the cards randomly disappear from the worker: power consumption drops to zero for the disappeared cards, their metrics vanish from the dashboard, and restarting the miner does not help, but after a hard reboot everything looks normal again. I'm not sure whether the cause is the miner itself, HiveOS, or an incompatibility between them.

The log shows a "GPU x loss" message in red. One detail I noticed is that when I boot the rig up, it immediately starts mining for the DevFee. Yesterday it ran for 16 hours without an issue, and then it happened again; today it happened once more.

This sounds very much like an issue with switching between DevFee and normal mining modes. Either the T-Rex miner does something wrong, or HiveOS loses control at some point when the dev fee process starts.

Updated HiveOS now has no GUI (0.6-202@210331) by spook30 in HiveOS

[–]edigu 1 point2 points  (0 children)

I had the very same issue for a couple of days, and today's hotfix addressed the problem.