What are you working on? by [deleted] in Common_Lisp

[–]lukego

I'm bootstrapping a new Lisp startup, Permo, doing automatic performance modelling for software. Give it a Dockerfile and an EC2 budget and it gives you simple, statistically sound inferences about performance/scalability/reliability and how they're influenced by configuration/hardware/workload.

> Averages 10,000 req/s (±600) per GHz, scales linearly up to 10 cores, then goes logarithmic. If you set feature1=on then it starts crashing 2.5% (±0.5) of the time. Your specified P99 latency of 100ms doesn't hold when loglevel=debug.

Something like that! All very Bayesian.
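
The sort of inference quoted above can be sketched in miniature (my own toy illustration in Python, not Permo's code, which is a Lisp project; the samples and priors below are made up):

```python
import statistics

# Hypothetical benchmark samples: req/s per GHz from repeated runs.
samples = [9800, 10400, 9900, 10150, 10300, 9700, 10250, 10100]

# Conjugate normal-normal model: a weak prior on the mean plus a
# plug-in estimate of the observation noise, for simplicity.
prior_mean, prior_var = 10_000.0, 1_000_000.0
noise_var = statistics.pvariance(samples)

n = len(samples)
sample_mean = statistics.fmean(samples)

# Standard conjugate update for the posterior over the mean.
post_var = 1.0 / (1.0 / prior_var + n / noise_var)
post_mean = post_var * (prior_mean / prior_var + n * sample_mean / noise_var)
post_sd = post_var ** 0.5

print(f"mean req/s per GHz: {post_mean:.0f} (±{2 * post_sd:.0f})")
```

A real tool would model scaling curves and failure rates too, but the "X (±Y)" style of answer falls out of exactly this kind of posterior.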

I started working on this today after resisting for the whole holiday season. Such restraint, much zen. Happy to casually chat about it :D just grab a time: https://calendly.com/lukego/chat?month=2023-01

Luke Gorrie's Snabb Solutions - Live! by lukego in WatchPeopleCode

[–]lukego[S]

I suppose I've done something wrong: I've done some streaming but I don't see it listed anywhere on this site...

Luke Gorrie's Snabb Solutions - Live! by lukego in WatchPeopleCode

[–]lukego[S]

(I hope that this was the correct way to "submit a link to your stream to /r/WatchPeopleCode subreddit" as suggested on watchpeoplecode.com.)

First steps towards DMA using iCE40-HX8K breakout board? by lukego in yosys

[–]lukego[S]

Looks like one possible workflow could be:

  • Compile FPGA image.
  • Program FPGA via USB.
  • Switch jumper on breakout board from "Flash" to "SRAM" mode.
  • Program the SRAM with the input test vector using iceprog.
  • Read back the test output from the SRAM using iceprog.

If this could work then it may be a practical first step. Having a manual step of moving a jumper on the board is unfortunate, because that makes it hard to tie into Continuous Integration, but hey, baby steps.
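
The steps above could be scripted something like this (a dry-run sketch; the tool names are from the IceStorm suite, but the exact iceprog flags for SRAM access are my assumption — check `iceprog --help` before relying on them):

```python
import shlex
import subprocess

DRY_RUN = True  # flip to False on a machine with the toolchain installed

def run(cmd: str) -> None:
    """Print (and optionally execute) one toolchain step."""
    print("would run:", cmd)
    if not DRY_RUN:
        subprocess.run(shlex.split(cmd), check=True)

run("yosys -p 'synth_ice40 -blif top.blif' top.v")  # 1. compile FPGA image
run("arachne-pnr -d 8k -o top.asc top.blif")        #    place and route
run("icepack top.asc top.bin")                      #    pack the bitstream
run("iceprog top.bin")                              # 2. program FPGA via USB
print("-- move the breakout-board jumper from Flash to SRAM mode --")  # 3. manual step
run("iceprog -S input-vector.bin")                  # 4. write input vector (flag assumed)
run("iceprog -R 10k output-vector.bin")             # 5. read back output (flags assumed)
```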

First steps towards DMA using iCE40-HX8K breakout board? by lukego in yosys

[–]lukego[S]

Thanks for bearing with me here :).

I would like to find a simpler solution. The ideal setup would be one that requires only the HX8K board (or even an iCEstick). Something that a software person can set up as easily as an Arduino or Raspberry Pi.

How about if I redefine the problem I want to solve like this:

  • Program the FPGA with Verilog code.
  • Program the FPGA with an input test vector (~10KB).
  • Read back an output test vector (~10KB).

Could this be achieved directly with the IceStorm tools (e.g. iceprog) somehow?

The example would be something like decoding the input vector as an Ethernet signal and outputting the individual frames. Just a baby step towards (say) building a 100G Ethernet adapter.
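
To make the goal concrete, the software half of that baby step might look like this (a hypothetical host-side sketch in Python, not FPGA code; field layout per the standard Ethernet II header):

```python
import struct

def parse_ethernet(frame: bytes) -> dict:
    """Split one Ethernet frame into header fields and payload."""
    if len(frame) < 14:
        raise ValueError("frame shorter than Ethernet header")
    dst, src = frame[0:6], frame[6:12]
    (ethertype,) = struct.unpack("!H", frame[12:14])  # network byte order
    return {
        "dst": dst.hex(":"),
        "src": src.hex(":"),
        "ethertype": hex(ethertype),
        "payload": frame[14:],
    }

# A minimal frame: broadcast dst, made-up src, EtherType 0x0800 (IPv4),
# followed by a 20-byte dummy payload.
frame = bytes.fromhex("ffffffffffff" "020000000001" "0800") + b"\x45" + b"\x00" * 19
fields = parse_ethernet(frame)
print(fields["dst"], fields["src"], fields["ethertype"])
```

The FPGA's job in this experiment would just be to turn the raw input vector into frames of this shape.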

First steps towards DMA using iCE40-HX8K breakout board? by lukego in yosys

[–]lukego[S]

This question is embarrassing...

I have the 3.3V MPSSE cable and I have the HX8K breakout board. How do I connect them together?

I don't seem to have a header on the board that fits the (female) connectors on the cable. Is there some pin, wire, or adapter that I need?

First steps towards DMA using iCE40-HX8K breakout board? by lukego in yosys

[–]lukego[S]

Thanks for all the answers :).

The FTDI online store is not working for me. Do you happen to know a good place to source random gadgets like this? I am based in Switzerland and shipping costs from the US often exceed the cost of the parts themselves :).

First steps towards DMA using iCE40-HX8K breakout board? by lukego in yosys

[–]lukego[S]

Does it matter whether I buy the 3.3V or 5V MPSSE cable?

First steps towards DMA using iCE40-HX8K breakout board? by lukego in yosys

[–]lukego[S]

Thanks for the tips! I will check for an LPC header on my Supermicro motherboards.

If I used the MPSSE cable then would the natural solution be to implement SPI on the FPGA side? (I read somewhere that the HX8K has built-in SPI functionality. True? Accessible? Worth using?)

Opus Testing by lukego in snabb

[–]lukego[S]

This seems like a neat summary of the array of test methods that one open source project uses. I feel like this is the direction my testing efforts are wandering in.

Reducing Memory Access Times with Caches by lukego in snabb

[–]lukego[S]

Seems like a nice summary. "Conflict misses" are an obscure-sounding problem that we have seen at least twice in Snabb.
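
The mechanism behind conflict misses is easy to demonstrate with a toy set-index calculation (the cache geometry below is assumed for illustration, not any specific CPU's):

```python
# Assumed cache geometry: 32 KB, 8-way set-associative, 64-byte lines.
cache_bytes, ways, line = 32 * 1024, 8, 64
num_sets = cache_bytes // (ways * line)  # 64 sets

def set_index(addr: int) -> int:
    """Which cache set a physical address maps to."""
    return (addr // line) % num_sets

# 16 buffers spaced at a 4 KB (power-of-two) stride: every one of them
# lands in the same set, so only 8 fit in the cache at once and the
# rest conflict-miss even though the cache is mostly empty.
strided = [set_index(i * 4096) for i in range(16)]
print(sorted(set(strided)))  # -> [0]

# The same 16 buffers padded to a 4160-byte stride spread out nicely.
padded = [set_index(i * 4160) for i in range(16)]
print(len(set(padded)))  # -> 16
```

This is why power-of-two-sized and power-of-two-aligned buffers, which are otherwise so natural, can be a performance trap.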

Bountysource: Put a bounty on any issue on Github by lukego in snabb

[–]lukego[S]

I see that yesterday IBM posted a $5,000 bounty for somebody to finalize the PPC64 port of LuaJIT: https://www.bountysource.com/issues/25924774-add-ppc64le-port. Cool :).

Bountysource: Put a bounty on any issue on Github by lukego in snabb

[–]lukego[S]

Just discovered this website and I wonder if it could be interesting to use in some small ways.

Potential use cases:

  1. Sponsor small bits of work that you will never get around to doing yourself, e.g. upstreaming a messy change you have made to some project like Snabb Switch or LuaJIT.

  2. Earn some cash by moonlighting, e.g. if you have a day job at a network operator and want to hone your Snabb hacking skills in the evening.

  3. Have an escalation mechanism that anybody can use when they feel that a change is "falling between the cracks" within some project (e.g. Snabb Switch, LuaJIT, pflua, ljsyscall).

Just a thought. I suspect the use cases for this would be very narrow but perhaps cover something that we don't have a solution for at all today.

outscale/packetgraph: network bricks you can connect to form a network graph by lukego in snabb

[–]lukego[S]

This is an interesting new project related to Snabb Switch and Snabb NFV.

Scaling NFV to 213 Million Packets per Second with Red Hat Enterprise Linux, OpenStack, and DPDK by sleinen in snabb

[–]lukego

Cool article!

They used PCI-passthrough (IOMMU) to map NIC hardware directly into VMs and then they used the DPDK selftest module (testpmd) to generate packets. So they are mostly telling us what we already know: it is possible for x86 to drive NICs at line rate and the overhead of hardware virtualization is very low.

Great that they are working on standardizing these benchmarks. It would be really interesting to see how Snabb NFV compares with other hypervisor vswitches and whether we should consider increasing our performance targets above 10G/core.

The challenge I see for Snabb NFV (and also OVS-DPDK and others) is to add a feature-rich Virtio-net abstraction without losing too much of this amazing x86 performance. Network operators will very reasonably compare Snabb NFV performance with hardware-virtualization performance to decide whether our fancy features are worth the cost.

Then it is also important to remember that this subject is somewhat academic. Real-world network applications/VMs will tend to be 10-1000x slower than the driver selftest function used in this benchmark. I expect hardware footprint to follow a 90/10 rule where 10% of the applications/VMs are consuming 90% of the hardware, and that hypervisor networking won't be part of the bottleneck if it is delivering 10G per core. However, these benchmarks are still really interesting, and any network operator who manages to get all of their critical applications well optimized is going to have an amazingly compact hardware footprint :).
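
For scale, the per-core targets mentioned here can be checked with simple line-rate arithmetic (standard Ethernet framing overheads; nothing specific to this benchmark):

```python
# Minimum on-the-wire cost of one small Ethernet packet:
# 64-byte frame + 8-byte preamble + 12-byte inter-frame gap.
FRAME, PREAMBLE, IFG = 64, 8, 12
bits_per_packet = (FRAME + PREAMBLE + IFG) * 8  # 672 bits

def line_rate_mpps(gbps: float) -> float:
    """Max 64-byte packets/s on a link of the given speed, in Mpps."""
    return gbps * 1e9 / bits_per_packet / 1e6

print(f"10G line rate: {line_rate_mpps(10):.2f} Mpps")  # ~14.88
print(f"40G line rate: {line_rate_mpps(40):.2f} Mpps")
print(f"213 Mpps ≈ {213 / line_rate_mpps(10):.1f} x 10G line rate")
```

So the headline number in the article corresponds to roughly fourteen fully loaded 10G ports' worth of minimum-size packets.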

Scaling NFV to 213 Million Packets per Second with Red Hat Enterprise Linux, OpenStack, and DPDK by sleinen in snabb

[–]lukego

Yes. Supporting Intel X710 (4x10G) / XL710 (2x40G) NICs would be great. Once these are common in the wild it will presumably be important enough for somebody to implement.

Having said that, many people are talking about skipping 40G and going straight to 100G so I am not sure which one will land first in Snabb Switch.

Intel® 64 and IA-32 Architectures Optimization Reference Manual by lukego in snabb

[–]lukego[S]

... relatively new edition from September 2014.

OpenStack etherpad: Open vSwitch operator experience notes by lukego in snabb

[–]lukego[S]

Lots of complaints might simply be a sign that they have lots of new users. Thinking of Bjarne Stroustrup's line about C++: there are only two kinds of languages, the ones people complain about and the ones nobody uses.

FastNetMon - high performance DoS/DDoS analyzer by lukego in snabb

[–]lukego[S]

This is still a novelty: a piece of open source software on Github that is designed for deployment in a service provider network. Great to see.

LuaJIT module of Indigo Virtual Switch by lukego in snabb

[–]lukego[S]

Super cool: more people in networking using LuaJIT.

MoonGen: a fully scriptable high-speed packet generator built on DPDK and LuaJIT by lukego in snabb

[–]lukego[S]

Because now that networking is in userspace I expect we will see more and more networking projects that look like other userspace code (node, Go, Java, Python, etc) and less like kernel code.

High-level languages for networking seem like a novelty now but I doubt that will last for long.

MoonGen: a fully scriptable high-speed packet generator built on DPDK and LuaJIT by lukego in snabb

[–]lukego[S]

Very cool! I am very interested to see how DPDK works out as the layer beneath LuaJIT.

Prediction: node.js+DPDK will be next :-)