How is that for flexibility?
As everyone is well aware, the world is still going nuts trying to build more, newer, and better AI tools. Mainly by throwing absurd amounts of money at the problem. Many of those billions go towards building cheap or free services that operate at a significant loss. The tech giants that run them are all hoping to attract as many users as possible, so that they can capture the market and become the dominant or only party that can offer them. It is the classic Silicon Valley playbook. Once dominance is reached, expect the enshittification to begin.

A likely way to earn back all that money spent on developing these LLMs will be by tweaking their outputs to the liking of whoever pays the most. An example of what such tweaking looks like is the refusal of DeepSeek's R1 to discuss what happened at Tiananmen Square in 1989. That one is obviously politically motivated, but ad-funded services won't exactly be fun either. In the future, I fully expect to be able to have a frank and honest conversation about the Tiananmen events with an American AI agent, but the only one I can afford will have assumed the persona of Father Christmas who, while holding a can of Coca-Cola, will intersperse the recounting of the tragic events with a cheerful "Ho ho ho ... Didn't you know? The holidays are coming!"

Or maybe that is too far-fetched. Right now, despite all that money, the most popular service for code completion still has trouble working with a couple of simple words, despite them being in every dictionary. There must be a bug in the "free speech", or something.

But there is hope. One of the tricks of an upcoming player looking to shake up the market is to undercut the incumbents by releasing their model for free, under a permissive license. This is what DeepSeek just did with their DeepSeek-R1. Google did it earlier with the Gemma models, as did Meta with Llama. We can download these models ourselves and run them on our own hardware. Even better, people can take these models and scrub the biases from them. And then we can download those scrubbed models and run those on our own hardware. And then we can finally have some genuinely useful LLMs.
That hardware can be a hurdle, though. There are two options to choose from if you want to run an LLM locally. You can get a big, powerful video card from Nvidia, or you can buy an Apple. Either is expensive. The main spec that indicates how well an LLM will perform is the amount of memory available. VRAM in the case of GPUs, regular RAM in the case of Apples. Bigger is better here. More RAM means larger models, which will dramatically improve the quality of the output. Personally, I'd say one needs at least over 24GB to be able to run anything useful. That will fit a 32-billion-parameter model with a little headroom to spare. Building, or buying, a workstation that is equipped to handle that can easily cost thousands of euros.
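To make that claim concrete, here is a back-of-the-envelope sketch (my own, not from the original build; the overhead allowance is a rough guess):

```python
# Rough VRAM estimate for a quantized LLM: weights plus a crude
# allowance for KV cache and runtime buffers (the 4 GB is an assumption).
def vram_needed_gb(params_billions: float, bits_per_weight: float,
                   overhead_gb: float = 4.0) -> float:
    weights_gb = params_billions * bits_per_weight / 8  # 1B params at 8 bits ~ 1 GB
    return weights_gb + overhead_gb

# A 32B model at a common 4-bit quantization:
print(vram_needed_gb(32, 4))  # ~20 GB, which fits in 24 GB with a little headroom
```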
So what to do if you don't have that amount of money to spare? You buy second-hand! This is a viable option, but as always, there is no such thing as a free lunch. Memory may be the main concern, but don't underestimate the importance of memory bandwidth and other specs. Older hardware will have lower performance in those areas. But let's not worry too much about that now. I am interested in building something that can at least run the LLMs in a usable way. Sure, the latest Nvidia card might do it faster, but the point is to be able to do it at all. Powerful online models can be great, but one should at the very least have the option to switch to a local one, if the situation calls for it.

Below is my attempt to build such a capable AI computer without spending too much. I ended up with a workstation with 48GB of VRAM that cost me around 1700 euros. I could have done it for less. For example, it was not strictly necessary to buy a brand new dummy GPU (see below), or I could have found someone to 3D print the cooling fan shroud for me, instead of shipping a ready-made one from a faraway country. I'll admit, I got a bit impatient at the end when I found out I had to buy yet another part to make this work. For me, this was an acceptable tradeoff.
Hardware

This is the full cost breakdown:

And this is what it looked like when it first booted up with all the parts installed:

I'll give some context on the parts below, and after that, I'll run a few quick tests to get some numbers on the performance.
HP Z440 Workstation

The Z440 was an easy pick because I already owned it. This was the starting point. About two years ago, I wanted a computer that could serve as a host for my virtual machines. The Z440 has a Xeon processor with 12 cores, and this one sports 128GB of RAM. Plenty of threads and plenty of memory; that should work for hosting VMs. I bought it second-hand and then swapped the 512GB hard drive for a 6TB one to store those virtual machines. 6TB is not needed for running LLMs, so I did not include it in the breakdown. But if you plan to collect many models, 512GB may not be enough.

I have come to like this workstation. It feels very solid, and I haven't had any problems with it. At least, until I started this project. It turns out that HP does not like competition, and I ran into some problems when swapping components.
2 x NVIDIA Tesla P40

This is the magic ingredient. GPUs are expensive. But, as with the HP Z440, one can often find older hardware second-hand, for relatively little money, that used to be top of the line and is still very capable. These Teslas were meant to run in server farms, for things like 3D rendering and other graphics processing. They come equipped with 24GB of VRAM. Nice. They fit in a PCI-Express 3.0 x16 slot. The Z440 has two of those, so we buy two. Now we have 48GB of VRAM. Double nice.

The catch is the part about them being meant for servers. They will work fine in the PCIe slots of a normal workstation, but in servers the cooling is handled differently. Beefy GPUs consume a lot of power and can run very hot. That is the reason consumer GPUs always come equipped with big fans. The cards need to take care of their own cooling. The Teslas, however, have no fans whatsoever. They get just as hot, but expect the server to provide a constant flow of air to cool them. The enclosure of the card is somewhat shaped like a pipe, and you have two options: blow in air from one side or blow it in from the other side. How is that for flexibility? You absolutely must blow some air into it, though, or you will damage it as soon as you put it to work.

The solution is simple: just mount a fan on one end of the pipe. And indeed, it seems a whole cottage industry has sprung up of people selling 3D-printed shrouds that hold a standard 60mm fan in just the right place. The problem is, the cards themselves are already quite large, and it is not easy to find a setup that fits two cards and two fan mounts in the computer case. The seller who sold me my two Teslas was kind enough to include two fans with shrouds, but there was no way I could fit all of those into the case. So what do we do? We buy more parts.
NZXT C850 Gold
This is where things got annoying. The HP Z440 had a 700 Watt PSU, which might have been enough. But I wasn't sure, and I needed to buy a new PSU anyway because it did not have the right connectors to power the Teslas. Using this handy website, I deduced that 850 Watt would be sufficient, and I bought the NZXT C850. It is a modular PSU, meaning that you only need to plug in the cables that you actually need. It came with a neat bag to store the spare cables. One day, I might give it a good cleaning and use it as a toiletry bag.
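As a rough sanity check (my own numbers, not from the calculator mentioned above): each Tesla P40 is rated at 250 W, a 12-core Xeon typically sits somewhere in the 120–160 W range, and the board, drives, and fans add a few dozen watts more. That puts the theoretical peak around 700–750 W, so 850 W leaves a comfortable margin.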
Unfortunately, HP does not like things that are not HP, so they made it difficult to swap the PSU. It does not fit physically, and they also changed the main board and CPU connectors. All PSUs I have ever seen in my life are rectangular boxes. The HP PSU is also a rectangular box, but with a cutout, making sure that none of the normal PSUs will fit. For no technical reason at all. This is just to mess with you.

The mounting was eventually solved by using two random holes in the grill that I somehow managed to align with the screw holes on the NZXT. It sort of hangs stably now, and I feel lucky that this worked. I have seen YouTube videos where people resorted to double-sided tape.

The connector required ... another purchase.

Not cool, HP.
Gainward GT 1030

There is another problem with using server GPUs in this consumer workstation. The Teslas are meant to crunch numbers, not to play video games with. Consequently, they do not have any ports to connect a monitor to. The BIOS of the HP Z440 does not like this. It refuses to boot if there is no way to output a video signal. This computer will run headless, but we have no choice. We have to get a third video card, one we don't intend to ever use, just to keep the BIOS happy.

This can be the scrappiest card you can find, of course, but there is a requirement: it has to fit on the main board. The Teslas are bulky and fill the two PCIe 3.0 x16 slots. The only slots left that can physically hold a card are one PCIe x4 slot and one PCIe x8 slot. See this website for some background on what those names mean. One cannot buy just any x8 card, though, because often even when a GPU is advertised as x8, the actual connector on it may be just as wide as an x16. Electrically it is an x8, physically it is an x16. That won't work on this main board; we really need the small connector.
Nvidia Tesla Cooling Fan Kit
As said, the challenge is to find a fan shroud that fits in the case. After some searching, I found this kit on eBay and bought two of them. They came delivered complete with a 40mm fan, and it all fits perfectly.

Be warned that they make an awful lot of noise. You do not want to keep a computer with these fans under your desk.
To keep an eye on the temperature, I whipped up this quick script and put it in a cron job. It periodically reads out the temperature on the GPUs and sends that to my Homeassistant server:
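The script itself did not survive the trip to this page, so here is a minimal sketch of what it could look like. The Home Assistant URL, token, and sensor names are placeholders, and the original may have looked quite different; only the `nvidia-smi` query is standard:

```python
#!/usr/bin/env python3
"""Read out the GPU temperatures and push them to Home Assistant."""
import subprocess

import requests

HA_URL = "http://homeassistant.local:8123"  # placeholder
HA_TOKEN = "YOUR_LONG_LIVED_ACCESS_TOKEN"   # placeholder


def gpu_temperatures() -> list[int]:
    # nvidia-smi prints one temperature (in degrees C) per GPU, as plain CSV
    out = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=temperature.gpu",
         "--format=csv,noheader,nounits"],
        text=True,
    )
    return [int(line) for line in out.strip().splitlines()]


def post_temperature(index: int, temp: int) -> None:
    # Each GPU becomes its own sensor entity via Home Assistant's REST API
    requests.post(
        f"{HA_URL}/api/states/sensor.tesla_p40_{index}_temperature",
        headers={"Authorization": f"Bearer {HA_TOKEN}"},
        json={"state": temp, "attributes": {"unit_of_measurement": "°C"}},
        timeout=10,
    )


if __name__ == "__main__":
    for i, temp in enumerate(gpu_temperatures()):
        post_temperature(i, temp)
```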
In Homeassistant I added a graph to the dashboard that shows the values over time:

As one can see, the fans were noisy, but not particularly effective. 90 degrees is far too hot. I searched the web for a reasonable upper limit but could not find anything specific. The documentation on the Nvidia website mentions a temperature of 47 degrees Celsius. But what they mean by that is the temperature of the ambient air surrounding the GPU, not the measured value on the chip. You know, the number that is actually reported. Thanks, Nvidia. That was helpful.

After some further searching and reading the opinions of my fellow internet citizens, my guess is that things will be fine, provided that we keep it in the lower 70s. But don't quote me on that.
My first attempt to remedy the situation was by setting a maximum on the power consumption of the GPUs. According to this Reddit thread, one can lower the power consumption of the cards by 45% at the cost of only 15% of the performance. I tried it and ... did not notice any difference at all. I wasn't sure about the drop in performance, having only a few minutes of experience with this configuration at that point, but the temperature characteristics were certainly unchanged.
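For reference (not quoted from the thread): on Linux this kind of cap is set with something like `sudo nvidia-smi -i 0,1 -pl 140`, which limits GPUs 0 and 1 to 140 W each, the cap mentioned further below. Note that the setting does not survive a reboot unless persistence mode is on or it is reapplied at boot.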
And then a light bulb went off in my head. You see, in front of the GPU fans, there is a fan in the HP Z440 case. In the picture above, it is in the right corner, inside the black box. This is a fan that sucks air into the case, and I figured it would work in tandem with the GPU fans that blow air into the Teslas. But this case fan was not spinning at all, because the rest of the computer did not need any cooling. Looking into the BIOS, I found a setting for the minimum idle speed of the case fans. It ranged from 0 to 6 stars and was currently set to 0. Putting it at a higher setting did wonders for the temperature. It also made more noise.

I'll reluctantly admit that the third video card came in handy when changing the BIOS setting.

MODDIY Main Power Adaptor Cable and Akasa Multifan Adaptor

Fortunately, sometimes things just work. These two products were plug and play. The MODDIY adaptor cable connected the PSU to the main board and CPU power sockets.

I used the Akasa to power the GPU fans from a 4-pin Molex. It has the nice feature that it can power two fans with 12V and two with 5V. The latter certainly reduces the speed, and thus the cooling power, of the fan. But it also lowers the noise. Fiddling a bit with this and the case fan setting, I found an acceptable tradeoff between noise and temperature. For now, at least. Maybe I will need to revisit this in the summer.

Some numbers
Inference speed. I collected these numbers by running ollama with the --verbose flag, asking it five times to write a story, and averaging the result:
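(For anyone wanting to reproduce this: `ollama run <model> --verbose` prints timing statistics after each response, including an eval rate in tokens per second, which is presumably the figure being averaged here.)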
Performance-wise, ollama is configured with:

All models have the default quantization that ollama will pull for you if you don't specify anything.

Another important finding: Terry is by far the most popular name for a tortoise, followed by Turbo and Toby. Harry is a favorite for hares. All LLMs love alliteration.

Power usage

Over the days I kept an eye on the power consumption of the workstation:

Note that these numbers were taken with the 140W power cap active.
As one can see, there is another tradeoff to be made. Keeping the model on the card improves latency, but consumes more power. My current setup is to have two models loaded, one for coding, the other for generic text processing, and to keep them on the GPU for up to an hour after last use.
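(In ollama terms, that on-GPU lifetime maps to the keep-alive setting, e.g. `OLLAMA_KEEP_ALIVE=1h` in the server's environment, or a `keep_alive` parameter on individual API requests. That is my reading of the setup, not a quoted config.)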
After all that, am I glad I started this project? Yes, I think I am.

I spent a bit more money than planned, but I got what I wanted: a way of locally running medium-sized models, entirely under my own control.

It was a good choice to start with the workstation I already owned, and see how far I could get with that. If I had started with a new machine from scratch, it certainly would have cost me more. It would have taken me much longer too, as there would have been many more options to choose from. I would also have been very tempted to follow the hype and buy the latest and greatest of everything. New and shiny toys are fun. But if I buy something new, I want it to last for years. Confidently predicting where AI will be in five years' time is impossible right now, so having a cheaper machine that will last at least a while feels satisfactory to me.

I wish you the best of luck on your own AI journey. I'll report back if I find something new or interesting.