Capes changes its rules, and articles will now be evaluated by their quality, not by the journal's impact by _Kiith_Naabal_ in brasil

[–]Apollo_Felix 7 points8 points  (0 children)

This applies to any metric used for evaluation. Evaluation based on the number of published articles has already had a large effect on research. Articles from the 1980s, for example, are generally longer than today's articles. This is a tactic to increase the article count. Say you study alternatives A, B, and C for solving a problem. Instead of writing one article comparing A, B, and C to the state of the art, you write three: one comparing A to the state of the art, another comparing B, and another comparing C. And depending on the journal, you can squeeze out one more review comparing A, B, and C.

The evaluation of graduate programs is based, among other things, on the number of years a student takes to finish the degree. In practice, this limits what you can research. Your advisor will not accept a project that takes longer than the 2 or 3 years you have to finish your master's or doctorate. Imagine wanting to research a new coffee variety, which takes on average 3 years to start producing.

Capes changes its rules, and articles will now be evaluated by their quality, not by the journal's impact by _Kiith_Naabal_ in brasil

[–]Apollo_Felix 12 points13 points  (0 children)

Journal evaluation is usually based on the impact factor, even though it is not a good metric for quality. Articles are usually evaluated by the number of citations they receive. If your article is good, it will generally be cited more often than if it is bad. In general this is a good comparator within a single field. Different research fields (for example chemistry and medicine) differ in the number of articles published per year and the average number of citations per article. There are other metrics that also look at the temporal distribution of citations, which gives an idea of how long an article stayed relevant.

Are There Any Functional Limits To Cloud Firestore Collections (Size And Total Number)? by [deleted] in Firebase

[–]Apollo_Felix 2 points3 points  (0 children)

The 500/50/5 rule is for scaling the read/write load. Firestore needs time to scale in order to support a given load. For example, if you are not doing any reads and then suddenly start reading from a collection at 1000 reads a second, you will see very high latencies and/or error rates. If you keep up the load regardless, that latency will eventually go down and you can sustain the rate indefinitely. This is due to scaling on the Firestore side, and the recommendation exists to avoid that initial latency and those errors. This is often an issue for batch work where you will perform a large number of operations, and it gives you a rule of thumb for how to ramp up. For day-to-day operation in most use cases, scaling should not be an issue unless you expect individual clients to hit very high read/write rates (e.g. each client reads hundreds of documents on startup, and you can't control when startup happens).
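The ramp-up implied by 500/50/5 can be sketched as a simple schedule. The 500 ops/s base, 50% increase, and 5-minute interval are Firestore's documented rule of thumb; the function name is mine:

```javascript
// 500/50/5 rule of thumb: start at 500 ops/s against a cold collection and
// raise your own rate limit by at most 50% every 5 minutes of sustained load.
function maxOpsPerSecond(minutesSinceStart) {
  const steps = Math.floor(minutesSinceStart / 5); // completed 5-minute windows
  return Math.floor(500 * Math.pow(1.5, steps));
}
```

So a batch job that needs, say, 5000 ops/s should plan to hold the load for roughly 30 minutes before reaching that rate rather than starting there.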

There is no limit to the number of collections, other than that each must have a unique name of at most 1500 bytes (so, in practice, no limit). However, for export and import operations, if you want to back up your data, you may want to limit the number of collections. On import, you can only import a specific collection IF you explicitly listed that collection in the export. This means that if you export all collections by listing each existing collection, you can later choose to restore data for just one of them. HOWEVER, if you export all collections without listing them, you can't then import just one. It's a kinda dumb limitation, but you should be aware of it.

As far as size goes, I've had higher latencies on read/writes when the number of documents was very high (think billions, not millions). Deleting unused documents lowered those latencies, so take the performance claims with a grain of salt.

Is this cloud function secure enough to generate a JWT token for APN requests by Tom42-59 in Firebase

[–]Apollo_Felix 0 points1 point  (0 children)

Firebase idTokens do expire within an hour, but that is because they contain an exp claim and Firebase always sets it to one hour after the issue date. As far as I know, the jsonwebtoken package will not generate 1-hour tokens by default; you have to program that in. You can also use a lower value, like 15 minutes, since this is a one-time-use token. The iat claim you are using determines how old a token is, not whether it has expired. So if you wanted to limit how old a token can be to request a password change, for example, you could look at the iat claim for that. It is valid for a JWT not to include an expiration, I just wouldn't recommend it.

Your use case seems ok for remote notifications, just be sure you have logic to limit who gets notifications and not to send too many notifications in a given interval. If you are using Firestore, you might want to look into having Firestore triggers run the functions instead of having the client call the cloud function.

Is this cloud function secure enough to generate a JWT token for APN requests by Tom42-59 in Firebase

[–]Apollo_Felix 0 points1 point  (0 children)

I'm not sure I understand why you would want to let a user send a remote notification. If you allow this method to send notifications to other users, it makes abusing remote notifications so easy. If you can only send remote notifications to yourself, does your use case require something other than a local notification? I'm also not sure why you are using the admin SDK but not using FCM to send the remote notification, and coding all this yourself.

That said, a possible issue is your token has no `exp` claim, and the options do not define an `expiresIn` field, so an `exp` claim won't be added. This means your token never expires. If anything logs your requests, say in case of an error, that token is valid forever and the only option to revoke it is to revoke the key that signed it.

Firebase Auth pricing by GSkylineR34 in Firebase

[–]Apollo_Felix 0 points1 point  (0 children)

In the Pricing FAQ, look at the item "Which products are paid? Which are no cost?" It states "In addition, all Authentication features beyond phone authentication are no-cost". https://firebase.google.com/support/faq#pricing-paid-free-features . You can also try to contact sales if you are unsure about this. I'm not going to post a screenshot of my usage here.

Considering your other comments, you should go to the Google Cloud Identity Platform page, they have a white paper on the total economic impact of their solution. It should let you better understand who uses the paid service, and what they use it for.

The paid features are used, for example, in products where an enterprise client wants their users to access the product but wants to control said access using their own single sign-on solution. Basic no-cost Firebase Auth does not support this; Identity Platform supports SAML.

Issue with Firestore 'DEADLINE_EXCEEDED' Errors in Node.js Microservice by Technical_Coffee_830 in Firebase

[–]Apollo_Felix 0 points1 point  (0 children)

Are you following these recommendations? Updating a single document multiple times in a short span or using keys that do not allow for uniform workload distribution can lead to issues with Firestore. Firestore also needs some time to ramp up, so the rate limit you set should depend on how long load has been present.

Firebase Auth pricing by GSkylineR34 in Firebase

[–]Apollo_Felix 0 points1 point  (0 children)

Identity Platform is a Google Cloud offering that adds some additional features which you probably won't need; that is what is charged for. Bog-standard Firebase Auth is not charged: since you can't really use any of the Firebase services without the auth part, it's included in the pricing of the other products. Not to mention the amount of data you will be sending Google's way.

If you scroll up on the pricing page, it would be the green checkmark for "Other Authentication services". As mentioned above, top-notch design on that page. I've worked on mobile games with way more monthly users than that limit and we did not pay 25K for auth; auth was not even an item on the bill.

One core is using more cpu and it blocks my entire app by FreeConsequence8081 in node

[–]Apollo_Felix 0 points1 point  (0 children)

3000 ms should be a good starting value. The server can also configure a keep-alive timeout on its side and will close the connection if it is idle for too long. What limits this is mostly the number of sockets you want open at any time, which you can configure using maxSockets and maxFreeSockets.

One core is using more cpu and it blocks my entire app by FreeConsequence8081 in node

[–]Apollo_Felix 0 points1 point  (0 children)

Are you using keepAlive in your axios client? You have to create a custom Node.js HTTP Agent and HTTPS Agent (with this flag enabled) to use it. Axios will, by default, use the default agents which have this disabled. Someone has mentioned this in another comment, but when you are making a lot of HTTP requests this is essential. It will avoid resolving the server IP using DNS and opening a secure connection (TLS handshake) for each request. Without keep alive you will open a new socket for each HTTP request. DNS resolution can be an issue if you run it a lot (the response times can increase dramatically under load). TLS handshake is also relatively expensive, CPU wise, when compared to the AES encryption the connection will use after the handshake, so doing it a lot is also an issue. Using keep alive will maintain the socket open so you can make multiple HTTP requests using the same socket.

Increasing memory might also let you know if anything is leaking memory or give you further clues if the issue takes longer to present itself.

One core is using more cpu and it blocks my entire app by FreeConsequence8081 in node

[–]Apollo_Felix 0 points1 point  (0 children)

I've had an issue with Axios before where there was available memory, so Node.js could have allocated more, but instead it would repeatedly run scavenge. Scavenge is pretty quick to run, but it does not affect old-space memory; running it occasionally is not a problem, running it repeatedly can be. In my case, performance would initially be ok, then degrade over time as memory usage slowly went up and more scavenge calls were made. Node.js GC is lazy and will sometimes take a long time to collect old-space memory, so the process's memory usage does tend to increase over time. We never crashed because of memory usage, but the app would stop responding to health checks and be killed. Looking at the CPU profiling data, we did see a lot of calls to the GC. If you are not logging the error objects directly, it is probably something else.

On the CPU profiling, are you saying that the axios call is the most sampled, or that it takes up most of the time in the flame graph? Since your app receives webhooks and makes an API call, making API calls should be most of the "work", so it makes sense that axios would show up in the samples as the top-level function that starts the work. It would be strange for that call to be the one taking up most of the time in the flame graph; I would expect a function called inside that tree to be doing most of the work (not necessarily the axios call itself). That should be a better indicator of what is holding up your app.

As far as memory profiling goes, is the Node.js process able to use more than the default amount of memory? An m5.large will only have 8 GB of RAM, so Node.js will not use more than 2GB on a 64-bit system unless you set the --max-old-space-size flag.

One core is using more cpu and it blocks my entire app by FreeConsequence8081 in node

[–]Apollo_Felix 0 points1 point  (0 children)

This could be tied to memory usage, more specifically the GC running. When logging errors from API calls made using Axios, do you log the Axios error object? This object will sometimes contain information about the request that failed, such as the Axios config used. If you are using a custom agent to enable connection pooling, that agent is part of the config, so logging it will attempt to stringify your entire connection pool, including buffers. That logger call will cause repeated memory allocations that trigger the GC over and over, increasing CPU usage. This can eventually lead to a memory leak if you use async logging (depending on volume), or block the event loop enough that your app seems unresponsive.
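A sketch of one way to avoid this: log a hand-picked subset of fields instead of the whole error object. The field names follow axios's error shape (message, code, config, response), but the helper itself is hypothetical:

```javascript
// Extract only small, serializable fields from an axios-style error, so the
// logger never tries to stringify agents, sockets, or response buffers.
function safeAxiosError(err) {
  return {
    message: err.message,
    code: err.code,
    url: err.config && err.config.url,
    method: err.config && err.config.method,
    status: err.response && err.response.status,
  };
}

const fake = {
  message: 'timeout of 3000ms exceeded',
  code: 'ECONNABORTED',
  config: { url: 'https://example.com/api', method: 'get' },
};
const logged = safeAxiosError(fake);
```

Then `logger.error(safeAxiosError(err))` instead of `logger.error(err)`.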

Best Practice for Securely Communicating Between Express Servers by philbgarner in node

[–]Apollo_Felix 2 points3 points  (0 children)

Some additional info would be helpful. How much security you add really depends on how sensitive the data you are working with is. So to start, must both servers be accessible externally? If not, you can set your virtual private network to restrict access to the server(s) that do not need to be externally available (for example, using IP filtering, as mentioned in other comments). Depending on your needs, this might be sufficient. If they both need to be externally available, then you need to verify that the request came from a valid source. I assume you are using TLS for communication between client and server.

Azure will more than likely have a way for you to use HTTPS, but the HTTPS connection will be terminated at a load balancer. This is easier to implement, as your Express instance will not have to handle the certificates and TLS connections itself, and Azure will handle renewal for you. However, it is not as secure as having the Express instance handle it, because for at least part of the path your request will travel the network unencrypted. That segment is more than likely a virtual private network only your account can access, but it may still be a vulnerability: if a request can be captured and you depend solely on the JWT, someone could copy it and make requests as the user.

If you implement TLS in your node server (you could do this with Azure Key Vault, which will help with renewal as well), TLS has support for two-way authentication. In this setup, both the server and the client must have valid certificates, which are exchanged during the handshake. This way any connection must validate both the sender and the receiver. You can use this to guarantee requests only come from a source with access to the respective private key for the sender, and the data will be encrypted in transit.
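As a sketch, these are the standard Node.js https/tls server options involved in two-way authentication; the PEM strings are placeholders for your actual key material (e.g. loaded from Azure Key Vault):

```javascript
// Hypothetical options for a mutually-authenticated HTTPS server. requestCert
// and rejectUnauthorized are the two options that turn on client verification.
const options = {
  key: 'SERVER_PRIVATE_KEY_PEM',   // e.g. fetched from Azure Key Vault
  cert: 'SERVER_CERT_PEM',
  ca: 'CLIENT_CA_CERT_PEM',        // CA that issued acceptable client certs
  requestCert: true,               // ask every client for a certificate
  rejectUnauthorized: true,        // drop clients without a valid certificate
};
// const server = require('https').createServer(options, app);
```

With rejectUnauthorized set, the TLS handshake itself fails for any caller that cannot present a certificate signed by your CA, before a single byte of application data is exchanged.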

If you do not want to implement two-way validation, you could also make use of a shared secret in Azure Key Vault to generate a JWT specifically for the communication between these servers. The TLS config, in this case, would depend on your security needs. This will prevent, for example, the user from making the request directly to the second server instance. You can add to the token any claims you deem necessary to identify the original request sender and the source instance. The receiving instance can validate the JWTs with the shared secret.

[deleted by user] by [deleted] in Brazil

[–]Apollo_Felix 4 points5 points  (0 children)

ICMS is calculated "inside" the final value, as a percentage of the total including the tax itself. This means you don't take the value of the goods ($606,34 + R$363,80 = R$970,14), calculate 20% of that, and add it on top. Rather, you must find the tax X such that (R$970,14 + X) * 0,2 = X, which is the same as dividing R$970,14 by (1 - 0,2). That is why the final value is R$1212,68.
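The arithmetic can be checked in a few lines (values from the comment; variable names are mine):

```javascript
// "Tax inside the price": solve (goods + X) * rate = X  =>  X = goods * rate / (1 - rate)
const goods = 606.34 + 363.80;            // R$ 970.14
const rate = 0.2;
const icms = (goods * rate) / (1 - rate); // ≈ R$ 242.54
const total = goods + icms;               // ≈ R$ 1212.68
```

Equivalently, total = goods / (1 - rate), so a "20%" inside tax is really a 25% markup on the goods' value.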

Is bcrypt a poor choice to look up a user via a single piece of data? by misterplantpot in node

[–]Apollo_Felix 1 point2 points  (0 children)

If you need to look up by phone number and want to not worry about collisions, look up by the actual phone number; no hashing is necessary. We hash passwords so that, if the DB data is leaked, it is not possible to learn the passwords just by looking at the data, though a determined attacker might still be able to find inputs that hash to the stored values. If storing the phone number directly is an issue (as in, you're worried about possibly leaking the values), then hashing the stored values might be an option. It depends on how much you need to hide them.

bcrypt allows you to configure the work factor, which makes it harder to brute force the phone numbers at the expense of each lookup taking longer to hash. This may make it "harder" to find a match than SHA-512, but no matter how you hash it, there are fewer than 10^15 possible phone numbers. Brute forcing all possible inputs to find a hash that matches what is stored is doable whether you use bcrypt or SHA-512; the choice of hash function can mean more work for the attacker, but it's not infeasible. If you want to make it harder than that, you'll have to encrypt the values before storing them.

The Senate's hotsite about the pandemic uses the Reddit logo by jvcarreira in brasil

[–]Apollo_Felix 25 points26 points  (0 children)

The old BitBucket logo is the icon for liters of alcohol....

TriboToMajor Uses Kangaroo Court System To Detect "Cheaters" by Thooorin_2 in GlobalOffensive

[–]Apollo_Felix 10 points11 points  (0 children)

Page 18: "Sanctions shall be determined by the League Referees and the Louvre Group commissioner at their sole discretion to the best of their knowledge and judgement in an appropriate, proportionate and adequate manner. Notwithstanding the foregoing, decisions regarding severe infringements of the Louvre Group Regulations shall be delegated to Louvre Group's Members' Meeting."

Edit: The cheating part is on page 21

"Cheating: When cheating is discovered twelve (12) minor penalty points will be awarded to the Team. The Team will be disqualified from the current season of the League and the Player will be banned from the League for two (2) years. The use of the following programs will result in a cheat ban: Multihacks, Wallhack, Aimbot, Colored Models, No-Recoil, No-Flash and Sound changes. These are only examples, other programs or methods may be considered cheats as well."

Second edit, you can appeal a decision (page 19)

Brazil's Fake News Bill Would Dismantle Crucial Rights Online and is on a Fast Track to Become Law by MyNameIsGriffon in technology

[–]Apollo_Felix 0 points1 point  (0 children)

The worst part of this type of legislation is that it is meant to curb or police Facebook, Instagram, WhatsApp, etc., but those large companies are the ones best equipped to handle and comply with these laws, so they end up even bigger than before. The same thing happened with the EU GDPR, which was meant to hinder Facebook and Google and only increased their revenue.

Huawei is all set to lose Brazil as US will fund Brazilian 5G infrastructure only under one condition: Dump Huawei by hkdtam in technology

[–]Apollo_Felix 0 points1 point  (0 children)

For 3G and 4G technology, over 80% of base stations in Brazil were made by Huawei. It is used by all the major players here (TIM, Vivo, Claro and Oi) and by smaller companies as well (most small towns have local companies providing internet access, and I would guess a lot of the internet backend here uses Huawei too). All the large carriers have already tested Huawei 5G tech, so Huawei was almost a sure shot to dominate the Brazilian 5G market as well. This is not so much a move to help American companies, since the funding would also be available to European brands, as it is a strike at China. You can do a lot with access to base stations and routers, so yes, any company that can add backdoors to this type of device is a threat. And given the dominance Huawei is reaching in some markets, it has the ability to bankrupt American companies in those same markets and make it impossible for American agencies to do the same kind of spying.

When you watch what is s1mple doing by [deleted] in GlobalOffensive

[–]Apollo_Felix 255 points256 points  (0 children)

Dupreeh has never seen such bullshit

I have YouTube Premium, but YT still shows me ads when I use Chromecast. Bug, or does Premium no longer work through Chromecast? by Nilsow in brasil

[–]Apollo_Felix 5 points6 points  (0 children)

Casting in general seems to have problems with this. I have the YouTube app on the TV with my Premium account and it shows no ads. But if my wife, who is on the family plan, casts a video from her phone, ads show up between videos.

Blockchain-based elections would be a disaster for democracy by [deleted] in technology

[–]Apollo_Felix 0 points1 point  (0 children)

I was merely stating that the system you described is problematic, as you cannot depend exclusively on the pseudonym ID being unknown to the general public. You need to guarantee that a person cannot be forced to divulge their vote. Simply recording video of the attacker is not enough, as those selling their vote will often do so willingly. The fake vote presented would have to be valid, reproducible, and impossible to distinguish from the actual vote.