

I have frigate installed on my SSD with records storing in my HDD ZFS pool which works just fine. I just have the storage pool set as a mount point for the frigate LXC.
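For anyone setting up the same thing, the mount point is a single command on the Proxmox host; the container ID and dataset path below are just stand-ins for my setup:

    pct set 201 -mp0 /tank/frigate,mp=/media/frigate

Frigate then writes its recordings to /media/frigate inside the container and they land on the HDD pool.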
No worries, it is just my internal frustration with Nvidia coming out in my comment. It could be a good device if they hadn't abandoned it years ago, though that fact seems to be lost in their current pricing. There isn't really anything comparable on the market, but I don't think it's worth the price in its current form.
As many as your hard drives or upload bandwidth can handle since they would be playing directly and not transcoding.
As many as most GPUs without all the extra cost and power draw. Nvidia sets a transcode limit of 2 sessions unless you disable it. You really shouldn’t ever be transcoding 4k content. Most people will duplicate 1080p and 4k content and not share the 4k library for remote streaming/external users to avoid transcoding, and 1080p transcodes are no sweat. Furthermore, the goal should be to avoid transcoding wherever possible, so it’s unlikely that you’d have multiple people doing intensive transcoding simultaneously if you follow the above advice. You’ll want everyone to direct play as much as possible.
Technical friends are the best friends.
I don’t know how Kodi still goes on for this long. I messed around with it over a decade ago and had all the same issues back then.
And it runs Google services, and it costs a fortune, and it hasn’t seen a refresh in 6 years.
If they have a 5th gen or newer Intel CPU, Quicksync will work excellently for transcoding. No discrete GPU needed.
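If they want to sanity-check that Quicksync actually kicks in before committing, a throwaway ffmpeg transcode is the quickest test (file names here are just placeholders):

    # if this runs well above realtime speed, hardware encode is working
    ffmpeg -hwaccel qsv -i input.mkv -c:v h264_qsv -b:v 8M -c:a copy output.mkv

In Plex/Jellyfin it's then just the hardware transcoding checkbox once /dev/dri is exposed to the server.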
Don’t use Mullvad for torrenting. They’re a great VPN but they had to remove port forwarding so you’ll be unable to torrent properly. AirVPN is an alternative that still has port forwarding available.
I can’t answer your question as I rely on Plex rather than fooling around with my own security, but I’d suggest reconsidering the Pi and a microSD to host Jellyfin. Neither of those is a good fit unless you plan on sticking to very specific audio and video codecs to avoid all transcoding and your upload speeds are capable of serving the full bitrate of your files. Beyond that, SD cards are terrible for this kind of task and you’d be much better served with an SSD as your boot/data drive for robustness. I can’t even count the number of failed SD cards I’ve had over the years.
If you’re trying to watch 4k content in a browser, AFAIK, Edge is the only one capable.
I know it’s “cheap insurance,” and I’ll never convince you otherwise (nor do I intend to – you do you), but it’s really just a waste of money/oil with modern synthetics. Even if you stretched it out to just 5k you’d be saving almost half as much oil/money while maintaining the same protection. Using a quality filter (factory OEM, Wix) is important too.
I’ve put around 180k miles on my Toyota in the last 9 years with 9k-10k intervals and it runs great as well with a sparkling interior under the valve cover.
I can believe 19k on modern engines with modern oil, but I have a hard time believing they recommended the same on many vehicles pre-2000, when engines and oil were much less robust than they are now.
I also run around 10k miles between changes, but newer engines (2013 Camry) are much easier on the oil than a straight-six from 1992 so I’d be hesitant to push it quite as far without doing an oil analysis. You could also just change the filter and keep the same oil at 5k then change both at 10k (again depending on how dirty the engine makes it).
Why’s this guy doing oil changes every 3k miles on his Jeep? Just spend the extra $5 for synthetic and push it out to 5k+ miles.
Edit: this does seem interesting but I think it would work better as a smartphone app that syncs with your home server. I drive a lot for work and it would be a huge pain in the ass to continually track mileage and whatnot on my desktop (or presumably from a webui on my phone but only in range of my wifi).
What displays when you run “id” as your user? You’ll want it to match what you’re inputting in the docker compose. I may have missed it but I didn’t see you identify what your personal UID and GID are in the Google doc.
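For reference, it should look something like this (the username and groups here are made up); the uid and gid numbers are what should go into whatever user/group variables your compose file takes (PUID/PGID in most linuxserver-style images):

    $ id
    uid=1000(alice) gid=1000(alice) groups=1000(alice),27(sudo),998(docker)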
As a janky fallback, what if you just add a new SMB user and password and see if that one connects: sudo smbpasswd -a <username>
Try the suggestion from the user on the other post and run “ls -an” to see the numeric user IDs rather than the names you’re assigning. I’ve recently been building a new server with Proxmox and already learned this same lesson: in an unprivileged container the IDs get shifted by 100000 on the host, so container user 1000 shows up as 101000 outside it, which keeps it from automatically having host permissions from my understanding.
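As a concrete example of what that looks like with a bind-mounted share and the default Proxmox mapping (paths here are placeholders):

    # inside the unprivileged container
    $ ls -an /mnt/media
    drwxr-xr-x 2 1000 1000 4096 Jun  1 12:00 movies

    # same directory seen from the Proxmox host: everything shifted by 100000
    $ ls -an /tank/media
    drwxr-xr-x 2 101000 101000 4096 Jun  1 12:00 movies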
You can set individual seed time/ratios in the indexer settings for each tracker in sonarr and radarr.
I run everything in LXC containers because AFAIK using VMs means you are limited on sharing hardware: if I wanted to use the iGPU for Plex and something else, it would be locked to only work on the Plex VM. I mainly just have an unprivileged LXC and a second privileged LXC, both running Portainer, that run most of my services.
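The GPU-sharing piece is just passing /dev/dri into each LXC that needs it. Roughly what mine looks like, with the container ID as a placeholder and device numbers worth double-checking against ls -l /dev/dri on your host:

    # /etc/pve/lxc/<container id>.conf
    lxc.cgroup2.devices.allow: c 226:0 rwm
    lxc.cgroup2.devices.allow: c 226:128 rwm
    lxc.mount.entry: /dev/dri dev/dri none bind,optional,create=dir

Since it’s a bind mount rather than PCI passthrough, the same iGPU stays available to the host and to any other container set up the same way (unprivileged containers may also need the render group mapped or device permissions adjusted), which is the whole reason I stuck with LXCs over a VM.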