Strawberry jam supply secured! 🍓👨🍳
=> More information about this toot | View the thread
A kind of store I'd like to exist: glue cabinet
They stock many varieties of glue, probably also solder paste. They make sure everything is fresh, stored under the proper conditions and so on. You can rent a cartridge, use some of it and bring the used cartridge back the next day or so. You pay by the weight used.
Written by someone who just threw away a 90%-full cartridge that had cured to a solid block - again ☠️
=> More information about this toot | View the thread
Trying to understand the datasheet of an NTC, the Murata NXFT15XH103FEAB021. They specify the beta/B value - very good, I understand how to calculate temperature/resistance with it.
But they also list other B-constants as "Reference Values" for different temperature ranges. I don't understand what they want to tell me with them. Can anyone explain?
https://www.murata.com/en-eu/products/productdetail?partno=NXFT15XH103FEAB021
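For reference, the extra range-specific B constants exist because the beta model is only a two-point fit: B itself drifts with temperature, so a value fitted for 25/50 °C is less accurate at, say, 85 °C. A minimal sketch of the conversion, assuming the nominal 10 kOhm / B = 3380 K values (check your datasheet for the exact figures):

```python
import math

def ntc_temperature(r_ohm, b_kelvin=3380.0, r0_ohm=10_000.0, t0_c=25.0):
    """Convert an NTC resistance reading to temperature with the beta model.

    B is only a fit over a limited range; 3380 K is assumed here as the
    nominal B25/50 constant of a 10 kOhm NTC (10 kOhm at 25 degC).
    """
    t0_k = t0_c + 273.15
    # 1/T = 1/T0 + (1/B) * ln(R/R0)
    inv_t = 1.0 / t0_k + math.log(r_ohm / r0_ohm) / b_kelvin
    return 1.0 / inv_t - 273.15

print(round(ntc_temperature(10_000.0), 2))  # → 25.0 (R = R0 by definition)
```

For better accuracy over a wide span you would pick the B constant fitted for your working range, or interpolate the full R/T table instead.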
[#]electronics #datasheet
=> More information about this toot | View the thread
The way it works is like this:
To light up green, the input on the left is pulled low. The reference input of the TL431 stays below 2.5V, so it doesn't conduct.
When the input is in HiZ, the 3.3V supply is only connected through the green LED, which reduces the voltage at the TL431's reference divider to about 1.5V. This is still below the threshold of the TL431, so both LEDs are off.
When the input is at 3.3V, the ref of the TL431 is above the threshold, so it pulls its cathode down to about 2V. It needs at least 1 mA through the cathode to work, so I supply that through R37. It can't be supplied just through Q7, because the TL431 already draws a bit of current before the threshold is reached and we don't want Q7 to turn on just yet. R36 is needed because the ref input sinks current once you've crossed the threshold, and that needs to be limited.
When the TL431 turns on and pulls its cathode low, it draws current through Q7, which then switches the gate of Q8, turning on the red LED.
The LED I'm using is in a common-anode configuration, so I need the extra stage with Q8. This wouldn't be necessary for LEDs that have both pins individually brought out.
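The three cases can be summarized in a small behavioral sketch (voltages taken from the description above; this is just state logic, not a circuit simulation):

```python
TL431_REF_V = 2.5  # TL431 internal reference threshold

def led_state(input_state):
    """Behavioral model of the one-pin tri-state LED decoder.

    Maps the MCU pin state to (green_on, red_on). Not a circuit
    simulation - just the decode logic described in the text.
    """
    if input_state == "low":
        # current flows through the green LED into the low pin
        return (True, False)
    if input_state == "hiz":
        # only ~1.5 V reaches the ref divider: below 2.5 V, TL431 off
        return (False, False)
    if input_state == "high":
        # ref above 2.5 V: TL431 conducts, Q7/Q8 switch the red LED on
        return (False, True)
    raise ValueError(input_state)

for s in ("low", "hiz", "high"):
    print(s, led_state(s))
```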
=> More information about this toot | View the thread
Common problem: the microcontroller I selected has one GPIO fewer than I would normally need.
I want to use a nice RGB side LED and drive at least the red and green parts (and full off), but I've only got one pin on the MCU for this. I know Charlieplexing, but that only works with two or more pins.
Here is what I've come up with: use a TL431 as a comparator to differentiate between low, HiZ and high.
[#]electronics #protoboard #analog
=> View attached media | View attached media | View attached media
=> More information about this toot | View the thread
The trackpoint works exactly as on the Thinkpads, so after using a Thinkpad I felt right at home. The "DIY" version of the keyboard is highly configurable. Not only can you install your own keyswitches, you can also choose different layout styles like ANSI or ISO return keys, split the shift and space keys and so on. This is done with metal inserts that you screw into the switch-holding plate. You get a nice selection of keycaps with it, but I bought an extra set of German keycaps and mixed them in to have the keycaps with the correct print for my layout. I also ordered the aluminium front bezel because that is the only way to get the two keys above the left and right cursor keys, used for convenient PageUp/Down and Home/End with Fn.
I have Cherry Ergo Clear switches installed. After adding rubber dampening rings I quite like them.
The keyboard is programmable, so you can define the scancode each key outputs, what to do on the Fn-layer, and also implement small macros and so on. So if you prefer Ctrl/Fn instead of Fn/Ctrl, for example, that is easy to change. You can click all this together on the Tex website and then create a small config file that you upload to the keyboard.
The only downside is that the keyboard's firmware is not open source like on my Keyboardio. The config tool also isn't available offline and the config file format isn't documented, so should Tex close shop, it would take a bit of reverse engineering to reconfigure your layout.
But other than those non-open downsides I really like it.
=> More information about this toot | View the thread
I've had a new keyboard for several weeks now that I want to show: the Tex Shura
I was looking for a small keyboard for my electronics bench, because the bench is always full and I still need a keyboard there. While there are even smaller models out there, I wanted something still comfortably usable where I don't have to learn complicated chording. It saves space by moving the F-keys onto an Fn-layer. It also has a trackpoint, so I don't need a mouse on the bench anymore.
I find this keyboard so nice that since getting it I have mostly moved it to the couch and use it for browsing there, using the Bluetooth option to connect it to my living-room PC. I guess I will order another one for the electronics bench...
Thanks go to @chipperdoodles for making me aware of Tex and their product lineup with a post several weeks ago.
https://tex.com.tw/products/shura-diy-type
[#]MechanicalKeyboard #keyboard #review
=> View attached media | View attached media | View attached media
=> More information about this toot | View the thread
So all in all I think the ConnectX-6 is a good card to use when you want to set up a virtualization server that is hooked up to more than just a gigabit port. When you have two LACP-bonded ports for increased bandwidth and/or reliability, the internal switch is quite a unique feature you really want. If you just have one upstream port, the cheaper Intel E810 could also fit.
I plan to use one card in a server I want to put in colocation. But one thing I have to figure out first is how to best manage the additional software complexity of either devlink/tc or Open vSwitch. This is such a core part of a VM setup that it really must be reliable and you have to feel confident in the solution you choose.
=> More information about this toot | View the thread
While researching this I found out that Intel offers something similar with the "eSwitch" feature of their E810 cards. Since I had such a card on hand, I tried it out too:
It offers a switchdev driver and rules just like the Mellanox card and you can use it with or without Open vSwitch to control the connection to your VMs and virtual functions. They also offer advanced rules and actions like VLAN-tagging and filtering on port numbers etc. The performance is even a few single-digit MBytes/s better than on the Mellanox card.
But there is one important limitation: virtual functions are tightly bound to one physical port. You can't bond the physical ports together or add them both into one big bridge. The driver complains and errors out when you try to do that. They also explain this limitation in their readme.
While they claim that their cards are highly flexible and can be reconfigured with firmware (they call it "DDP"), I'm not sure if this limitation is something that they can work around with software/gateware in the future or if it is a hard limitation of their ASIC.
=> More information about this toot | View the thread
Controlling the switch is done with the kernel switchdev driver and the devlink and tc tools. Basic rules like VLAN-tagging are supported of course, but you can also do more complex things like L3 routing and routing based on TCP port numbers. So you could for example take one IPv4 and divide it among several VMs based on port numbers.
tc and devlink are the more barebones interface to this. In their manual they suggest using Open vSwitch to manage it. What it does is quite clever: it implements a quite capable software switch with the OpenFlow rule language, a management process and its own small database backend. Packets are sent to this software switch first and (slowly) switched in software according to the rules you set.
When the first packet is forwarded, the management process also calculates the minimal rules that were necessary to forward it and subsequent similar packets. It then creates a tc rule to offload this to hardware, so the following packets are switched purely in hardware. This ensures that only the rules that are actually in use right now are configured on the switch ASIC, reducing bloat on the ASIC and improving switching speed.
The downside is that Open vSwitch and OpenFlow introduce an extra layer of complexity that has to be managed and understood. There seems to be an Ansible collection to manage Open vSwitch, but I didn't see an easy way to use it to manage complex OpenFlow rules. But maybe I missed it, because I only had a short look.
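The barebones devlink/tc path could look roughly like this sketch - the PCI address, interface and representor names are placeholders, and the flower match is just an arbitrary example:

```shell
# Put the NIC's embedded switch into switchdev mode
# (PCI address is a placeholder - find yours with `devlink dev`)
devlink dev eswitch set pci/0000:01:00.0 mode switchdev

# Offload an example flower rule in hardware (skip_sw): redirect
# TCP port 2049 traffic arriving on the uplink to a VF representor
tc qdisc add dev enp1s0f0np0 ingress
tc filter add dev enp1s0f0np0 ingress protocol ip flower skip_sw \
    ip_proto tcp dst_port 2049 \
    action mirred egress redirect dev enp1s0f0npf0vf0
```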
[#]openvswitch
=> More information about this toot | View the thread
I'm continuing with my tests of the Mellanox/Nvidia ConnectX-6 Dx cards I got.
Over the last days I tested the integrated switching capabilities, called ASAP² by Mellanox. So what is it good for? Providing fast network access to virtual machines.
The conventional methods for doing this with KVM-based VMs on Linux are either a kernel bridge device, MacVTap or regular routing. But as you can see in my benchmark graph, these methods have quite severe performance limits.
The alternative is Single Root I/O Virtualization (SR-IOV). A network adapter with this capability (nearly all server adapters offer it today) splits out several "virtual function" PCIe devices. These virtual function devices are then made directly accessible to the VMs via the IOMMU of the CPU. This is faster because the CPU no longer has to do context switching between the VM and the host.
While SR-IOV has been available for quite some time now (it was first introduced around 2008), the implementations often had a few downsides.
The ConnectX-6 Dx addresses this with an internal switch that connects the physical ports and SR-IOV virtual functions together. This switch can be controlled with the kernel switchdev interface and is able to apply complex switching rules.
As you can see from the light-blue line in my benchmark graph, it is able to do LACP bonding of two physical ports and apply a layer3+4 xmit_hash_policy to utilize both bonded ports. So a VM is hooked up with just one virtual function and doesn't have to care about bonding at all. If either port of the bond is disconnected, the other one is used (I tested this to be really sure it works).
This is quite a good feature and something I haven't seen from other vendors.
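A rough sketch of such a setup with sysfs and iproute2 - interface names and the VF count are placeholders:

```shell
# Create four virtual functions on the first port
# (interface name is a placeholder)
echo 4 > /sys/class/net/enp1s0f0np0/device/sriov_numvfs

# Bond the two physical uplinks with LACP and layer3+4 hashing;
# the VFs behind the embedded switch then see one redundant uplink
ip link add bond0 type bond mode 802.3ad xmit_hash_policy layer3+4
ip link set enp1s0f0np0 down
ip link set enp1s0f0np0 master bond0
ip link set enp1s0f1np1 down
ip link set enp1s0f1np1 master bond0
ip link set bond0 up
```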
[#]networking #mellanox #homelab #virtualization
=> More information about this toot | View the thread
What I'm listening to today: OCEANS OF SLUMBER - The Waters Rising
https://oceansofslumber.lnk.to/TheWatersRising-SingleID
[#]metal
=> More information about this toot | View the thread
For the last 2 weeks I tried & failed to reproduce the measurements done by the EMC lab. I upgraded my test equipment at work to no avail and began to doubt my understanding of the matter.
Then it dawned on me that the impedance on the measured line isn't something they could calibrate for all the different customer DUTs, so it should have been measured - and that is something they didn't do back then. Then it was a bit of back and forth convincing them of their error, but in the end they agreed to redo the measurement.
For the readers deeper into this topic: this is conducted emissions on an Ethernet network cable, measured according to C.4.1.6.3 of EN 55032. They didn't measure the impedance per C.4.1.7, and the ferrite they used between the 150 Ohm resistor and the auxiliary equipment (AE) wasn't good enough, resulting in a much too low total impedance.
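To illustrate the effect, here is a rough sketch only - I'm assuming the AE branch (ferrite in series with the AE) appears in parallel with the 150 Ohm measurement resistor; the real CDN network is more involved:

```python
def total_impedance(z_ferrite, z_ae=50.0, z_meas=150.0):
    """Rough model: ferrite + AE in series, in parallel with the
    150 Ohm measurement resistor. All magnitudes in Ohm; the real
    test setup network is more complex than this.
    """
    z_branch = z_ferrite + z_ae
    return z_meas * z_branch / (z_meas + z_branch)

# A good ferrite keeps the total near 150 Ohm,
# a weak one drags the total impedance down:
print(round(total_impedance(5000.0)))  # → 146
print(round(total_impedance(100.0)))   # → 75
```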
=> More information about this toot | View the thread
You may remember my post about the completely failed EMC test from about 3 weeks ago - I finally got proof of what the issue was: the EMC test lab bodged the test method and didn't ensure proper impedance on the measured cable.
The two attached graphs are from the same device, but just one measurement is done properly...
[#]electronics #emc #emi #emissions
=> View attached media | View attached media
=> More information about this toot | View the thread
@gsuberland I tried to get this to work with Samba too, but it seems like the Samba multichannel implementation sticks to just one TCP connection per network card/link (two in my case), so it doesn't scale with the available cores and the performance isn't really boosted compared to the regular single-channel config.
=> More information about this toot | View the thread
Thanks to @gsuberland for pointing me to his blog post about multichannel configuration on Samba - this got me researching whether there is something similar for NFS - and there is: the nconnect= mount option.
And as you can see in the chart below it makes some difference...
In this scenario the hw crypto offload gets you quite near to the unencrypted performance. Also the bonded channels are properly utilized too. The only issue is that your application on the client has to send multiple requests in parallel to make use of this.
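A minimal sketch of such a mount - server name, export path, mount point and the connection count are placeholders:

```shell
# Open 8 TCP connections to the NFS server so parallel requests
# can be spread over them (and over both links of a layer3+4 bond)
mount -t nfs -o vers=4.2,nconnect=8 fileserver:/export /mnt/export
```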
=> More information about this toot | View the thread
Since I was already benchmarking this topic, I thought about what the best way to improve encrypted fileserver performance would be.
So I replaced my slowish Epyc 7232P (8-core, Zen 2) with a Ryzen 7950X3D (16-core, Zen 4), the same CPU as in my client.
As you can see, investing in a faster CPU gives much better results than the crypto offloading, and either NFS RPC-with-TLS or Samba become reasonable options.
=> More information about this toot | View the thread
My guess is that the reason for the limited speed with the crypto offloading of my ConnectX-6 Dx is that the IC on the network card runs into its limit in encryption speed for a single connection or TLS flow. Combine that with the time-consuming setup of each connection/TLS flow, and the usefulness of the whole idea gets smaller and smaller.
Or is this something they improved with the next gen ConnectX-7? I haven't seen Mellanox/Nvidia post any figures about crypto offload speed...
Does anybody know of some figures or has tested it? Maybe @manawyrm ?
=> More information about this toot | View the thread
Since I now had just one TCP connection, LACP-bundling my two links became useless, since the xmit_hash_policy=layer3+4 results in all packets being sent over the same link. So 25 GBit/s, or roughly 3.1 GBytes/s, was the theoretical limit.
I could see several nfsd kernel threads being used and spread over different cores, so the NFS part profits from multiple cores. But the kTLS part probably doesn't, because all the data is stuffed into one TCP connection in the end. Maybe there is some path for future optimization? NFS RPC-with-TLS is very new code, so I have some hope that its speed will improve in the future.
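Why one connection can't span the bond can be sketched with a toy stand-in for the layer3+4 hash (the real kernel hash is different, but the determinism is the point):

```python
def pick_slave(src_ip, dst_ip, src_port, dst_port, n_slaves=2):
    """Simplified stand-in for the bonding driver's layer3+4
    xmit_hash_policy. The same flow tuple always maps to the same
    slave, so one TCP connection can never use more than one link.
    """
    h = hash((src_ip, dst_ip, src_port, dst_port))
    return h % n_slaves

# one NFS connection: every packet takes the same link,
# capping it at one link's bandwidth
print(25e9 / 8 / 1e9)  # → 3.125 (GBytes/s per 25 GBit/s link)

# several connections (e.g. nconnect=4) can land on different links:
flows = [("10.0.0.1", "10.0.0.2", p, 2049) for p in range(40001, 40005)]
print([pick_slave(*f) for f in flows])
```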
=> More information about this toot | View the thread
Getting the NFS RPC-with-TLS encryption offloaded was a bit trickier:
Either the kernel kTLS doesn't currently support offloading TLS 1.3 at all, or at least the Mellanox driver doesn't support it yet. The code in the mlx5 kernel driver makes it clear that it currently only supports TLS 1.2.
But NFS RPC-with-TLS is very new code, so it is designed to work only with TLS 1.3.
I had to make some dirty hacks to tlshd (the userspace daemon that initiates the TLS connection before handing it off to the kernel) to get it to work by forcing TLS 1.2 and a matching cipher on the client and server.
So this is probably not something you want to run in prod until the mlx5 driver gains TLS 1.3 offloading.
=> More information about this toot | View the thread
=> This profile with reblog | Go to electronic_eel@treehouse.systems account