It’s Sunday somewhere already so why wait?
Let us know what you set up lately, what kind of problems you currently think about or are running into, what new device you added to your homelab or what interesting service or article you found.
I’ll post my ongoing things later/tomorrow but I didn’t want to forget the post again.
Why is it so hard to send large files?
Obviously I can just dump it on my server and people can download it from a browser but how are they gonna send me anything? I’m not gonna put an upload on my site, that’s a security nightmare waiting to happen. HTTP uploads have always been wonky, for me, anyway.
Torrents are very finicky with 2-peer swarms.
instant.io (torrents…) has never worked right.
I can’t ask everyone to install a dedicated piece of software just to very occasionally send me large files.
Sending is someone else’s problem. They have all sorts of different understandings and tools, and I can’t deal with them all. So the only alternative is to set them up with an account in (e.g.) Nextcloud, or just accept whatever Google service they use to send you a large file.
Sending other people files is easy in Nextcloud, just create a shared link and unshare when done. Set a password on the file itself.
Sending is someone else’s problem.
It becomes my problem when I’m the one who wants the files and no free service is going to accept an 80 GB file.
It is exactly my point that I should not have to deal with third parties or something as massive and monolithic as Nextcloud just to do the internet equivalent of smoke signals. It is insane. It’s like someone tells you they don’t want to bike to the grocer 5 minutes away because it’s currently raining and you recommend them a monster truck.
OK 80 GB is for sure an edge case. Nextcloud won’t even work for that due to PHP memory limits, I think.
Interesting problem. FTP is an option, with careful instructions to an untutored user. Maybe rsync over a VPN connection if it is always the same sender.
Not even sure what else would reliably work, except Tanenbaum’s adage (“never underestimate the bandwidth of a station wagon full of tapes”).
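If it’s always the same sender, a resumable rsync pull over the VPN is about as simple as it gets. A minimal sketch, assuming a WireGuard/OpenVPN link is already up (host, IP, and paths are examples):

```bash
# pull the file over the tunnel; -P (--partial --progress) lets a
# dropped transfer resume where it left off instead of starting over
rsync -avP sender@10.8.0.2:/outbox/bigfile.iso /srv/inbox/
```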
Thanks for the mention :>
Yeah, copyparty was my attempt at solving this issue - a single Python file for receiving uploads of infinitely large files, usually much faster than the alternatives (FTP, SFTP, Nextcloud, etc.), especially when the physical distance to the uploader is large (hairy routing).
I’m not gonna put an upload on my site, that’s a security nightmare waiting to happen.
curious to hear your specific concerns on this; maybe it’s something that’s already handled?
I already saw copyparty but it appears to me to be a pretty large codebase for something so simple. I don’t want to have to keep up with that because there’s no way I’m reading and vetting all that code; it becomes a security problem.
It is still easier and infinitely more secure to grab a USB drive, a bicycle and just haul ass across town. Takes less time, too.
You could always toss it in a sandbox for some isolation :> but yeah, I get you, all of the optional features do mean more code.
It’s a shame that browsers make stuff like chunked uploading so tricky, so even just the essentials would be a fair bit of logic – and you won’t get optimal upload speeds without sending chunks in parallel. And the corruption detection is also worth its weight in gold… Ah well, it is what it is hehe
I think that openssh or any ssh or ftp app should facilitate this.
Could you set a ‘password’ on the uploads? So the server will only accept and start the upload if the password is present. The password could be a passphrase, to make it easy to type in.
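Something like that is doable with stock OpenSSH: give them a password-protected, sftp-only account chrooted into a drop-box directory. A rough sshd_config sketch (the “dropbox” user and paths are examples; the chroot dir itself must be root-owned and not world-writable):

```
# /etc/ssh/sshd_config
Match User dropbox
    ChrootDirectory /srv/dropbox
    ForceCommand internal-sftp -d /uploads
    AllowTcpForwarding no
    X11Forwarding no
```

The account’s passphrase then acts as the upload password, and the user can only write into /srv/dropbox/uploads (which they own).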
I sometimes create them a Nextcloud account and send them the credentials
On a related note, it would be nice if there was a shared storage option for self hosting. It wouldn’t be the same as self hosting, but more like distributed hosting where everyone pools storage they have available and we could have an encrypted sharing option.
You’re describing the world wide web, except giving others write access
Kind of, though with rules. I think I’m describing something closer to structured, encrypted torrents.
Set up pi-hole on my network and I’m realizing it clashes with my VPN on my desktop and private relay on my Apple devices lol. Progress everywhere else though?
I’m trying to install Docker (only Docker) on an external HDD… I have some tutorials, but I can’t get it to work.
What exactly are you trying, and which operating system are you on?
I am setting up the server on a Raspberry Pi 4 with RaspiOS. I want to download torrents, and I have connected an external USB3 HDD for it… I was told that you could change the Docker directory to the external HDD so the containers live there. That way the microSD would see less wear, and in case of failure I would only have to reinstall RaspiOS and change the directories… All the configuration, Docker containers, etc. would be on the HDD… So far I have not succeeded, although I have followed 2 or 3 tutorials.
You can also mount everything on the Raspberry, leaving the microSD only for booting, but it is more complicated…
Excuse my DeepL english
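For anyone else stuck on this: the usual approach, as I understand it, is to point Docker’s data-root at the external drive in /etc/docker/daemon.json rather than moving individual containers. A sketch, with /mnt/exthdd as an example mount point:

```bash
sudo systemctl stop docker
# tell the daemon to keep images, containers and volumes on the HDD
echo '{ "data-root": "/mnt/exthdd/docker" }' | sudo tee /etc/docker/daemon.json
# copy any existing data over, then bring docker back up
sudo rsync -a /var/lib/docker/ /mnt/exthdd/docker/
sudo systemctl start docker
```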
I haven’t tried that but good luck!
I don’t have a technical issue at the moment… financial one 🤣
I store my data on an OLD My Book Live 3 TB HDD. It runs fine, but it’s getting full now, and the day it decides to shut down I don’t have any backups.
So I want to buy at least a 2-bay RAID1 enclosure.
I’ll plug it into my server directly, and that will become my NAS.
Maybe two 6TB drives, so I’ll double my existing storage?
I don’t have any backups.
Horror story, stranger. Oh no!
If this is stuff that you can’t afford to lose like family pictures, music library, or 90’s memes or something, I’ve had decent luck with iDrive for my offsite backups. 4TB relatively cheap, works with Linux (using some Perl scripts they made), and you can define your own encryption keys so not even they can see your stuff.
It reliably backs up my NAS.
They’ve usually got a crazy cheap deal to start with on their homepage or if you look around, for the first year. So maybe that could be helpful until you get some other storage. :)
(I think we pay $100 a year now for 4TB)
Setting up Let’s Encrypt auto cert renewal with ACME. Also looking to set up some monitoring service, basic stuff like CPU, memory usage, etc. If anyone has recommendations that have an Android app available, that would be awesome.
ACME.sh? I love that little tool.
Cert renewal via DNS-01, independent of any other services or ports. Set it up like 7 years ago and haven’t had to touch it since.
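For anyone curious, a DNS-01 wildcard issue with acme.sh looks roughly like this (Cloudflare as an example provider; domain and token are placeholders):

```bash
export CF_Token="your-dns-api-token"
acme.sh --issue --dns dns_cf -d example.com -d '*.example.com'
# acme.sh sets up its own cron job at install time, so renewals
# happen automatically from here on
```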
I’m personally using the Prometheus stack and like it, but I just check Grafana in my Android browser. I think Zabbix has an Android app, but I don’t know if it has as many possibilities as Prometheus.
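If it helps, the Prometheus side of that is tiny: a minimal scrape config for node_exporter covers CPU, memory, and disk (hostnames are examples):

```yaml
# prometheus.yml
scrape_configs:
  - job_name: node
    static_configs:
      - targets: ['server1:9100', 'server2:9100']
```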
Adding a second vdev today to my primary pool running on TrueNAS SCALE. The new vdev will be 12 TB by 4 wide, with the existing one at 10 TB by 5 wide. Drives are all 7,200 RPM enterprise-grade CMR drives.
May also add a second pool with the drives from my previous build which would be 10TB by 4 wide. These drives are 5,400 RPM so I would not use them in my primary pool.
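For reference, the CLI equivalent is roughly this, though TrueNAS SCALE normally wants pool changes done through the UI; the pool name, raidz level, and device names below are placeholders:

```bash
sudo zpool add tank raidz2 sdb sdc sdd sde
zpool status tank   # confirm the new vdev shows up next to the old one
```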
Also, Noctua sent me a bracket (at no charge) so I can correct the orientation of the CPU fan to face front to back (currently left to right). I also have a couple of 80mm fans and a 120mm fan to add to the server case. Once all of that is in place, I hope to start running Tdarr against my libraries without CPU temps hitting 95°C.
I’m trying to figure out why TrueNAS SCALE refuses to auto-start virtual machines… Other than that everything’s smooth atm :>
Working on testing Stalwart… and I will need to organize and properly document my various nft rules and routing tables, because it’s slightly getting out of hand…
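One low-effort starting point for the documentation bit is snapshotting the live state into dated files you can diff and annotate. A sketch (paths are examples):

```bash
# dump the active nftables ruleset and routing tables for the notes
nft list ruleset        > ~/net-docs/nft-$(date +%F).txt
ip route show table all > ~/net-docs/routes-$(date +%F).txt
```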
I’ve gotten a CalDAV server, audiobookshelf, and selfhosted obsidian live sync running on my laptop while I wait for movers to bring my shit to my house. Then gotta migrate it all across to my mini PC afterwards. Doing a modular NixOS setup to replace/complement what I used to have running on proxmox.
Once everything is on a dedicated machine I’m going to make a nice little homepage for it, inspired by a previous thread here.
Considering moving my stuff into a VirtualBox VM or two rather than running directly on my PC. Then at some point in the future, when I have the hardware for it, I can fairly easily move it to Proxmox. It also means installing a clean OS on my main PC is a quicker task: just install VirtualBox, load up the VMs, and a lot of stuff would already be done.
Consider using containers. I used to think this way, though now my goal is to get down to almost all containers since it’s nice to be able to spin up and down just what the one ‘thing’ needs.
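A minimal compose file shows why this is nice: each ‘thing’ declares exactly what it needs and nothing else. A sketch, using Jellyfin purely as an example:

```yaml
services:
  jellyfin:
    image: jellyfin/jellyfin
    ports:
      - "8096:8096"          # web UI
    volumes:
      - ./config:/config     # everything the service needs lives here
      - ./media:/media
    restart: unless-stopped
```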
Still haven’t properly set up my backups… I have my Nextcloud on ZFS (a single disk, sadly) and want to send it to a server at my parents’ place (also ZFS), but both are behind NAT. I’ve successfully set up WireGuard between the two, but the connection won’t stay up, so there’s still a ways to go till I get a happy off-site backup.
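If the tunnel going quiet is the NAT mapping timing out, the usual fix is PersistentKeepalive on the peer behind NAT. A sketch (key, endpoint, and addresses are placeholders):

```ini
# /etc/wireguard/wg0.conf on the NAT-ed side
[Peer]
PublicKey = <other side's public key>
Endpoint = parents-house.example.com:51820
AllowedIPs = 10.0.0.2/32
PersistentKeepalive = 25   # ping every 25s so the NAT entry stays alive
```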
Maybe Tailscale could be super useful for this!
I kinda shied away from tailscale because “I wanted to do it on my own” but I’ve just set up tailscale (while on a train no less) and it was really simple … Guess I’ll run with it for now :D now I’ll just have to set up the send/receive scripts but that’s just some BASHing my head against a wall ;)
Thanks for the suggestion!
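For the send/receive scripts, a minimal incremental sketch, assuming an initial full send has already happened (dataset and host names are placeholders):

```bash
#!/usr/bin/env bash
set -euo pipefail

DATASET="tank/nextcloud"            # local dataset
REMOTE="backup-host"                # reachable over the tailnet
REMOTE_DATASET="backup/nextcloud"   # dataset at the parents' place

# newest existing snapshot becomes the incremental base
PREV=$(zfs list -H -t snapshot -o name -s creation -d 1 "$DATASET" | tail -n 1)

NEW="${DATASET}@backup-$(date +%F)"
zfs snapshot "$NEW"
zfs send -i "$PREV" "$NEW" | ssh "$REMOTE" zfs receive "$REMOTE_DATASET"
```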
If you want more control over it and to become independent of the SaaS offering, you can even self-host headscale, a FOSS Tailscale control server. I run it myself with zero issues.
Yeah I saw that. It’s definitely intriguing. For now I’m good with the free Tailscale but might look into it. What’s your experience with headscale? It’s mostly a broker, right? So probably not too resource-hungry? I have a small public VPS for getting to my self-hosted infrastructure, so I might just add headscale there.
Headscale is pretty light on resources, especially since it doesn’t come with a web UI (there are third-party ones like headplane you can use, though). RAM usage is like 70 MB for me currently.
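And for anyone wondering, switching a client over is a one-liner; the stock tailscale client just gets pointed at your server (URL is an example):

```bash
tailscale up --login-server https://headscale.example.com
# then approve the node on the server, something like:
headscale nodes register --user myuser --key <key shown by the client>
```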
Had Jellyseerr break on me again on TrueNAS SCALE, something about a Jellyfin API blah blah blah. Decided that Sonarr and Radarr are fine enough to interface with that I don’t need it, and deleted the image.
Working on smart playlists for Pinepods. I’m the developer of the self-hosted podcast management server, and Sunday is always my new-feature day. I’ve had a lot of fun adding new features lately, like designing a homepage and adding OIDC login support. Don’t let anybody tell you auth isn’t fun.
Oh wow, I hadn’t known pinepods! I’ve been looking for a selfhosted podcast management thing for literal years and recently audiobookshelf popped up, but maybe I should check Pinepods instead! I don’t have audiobooks anyways. Are there Android Apps that can be used as a frontend?
Btw, GitHub links to https://pinepods.online/, but the website seems to be exclusively available on https://www.pinepods.online/.
Edit: Just found you release an Android app as well. I’ll have a look!
Yep! The Android app is somewhat in the works. It compiles, works, and has all the features of the web version currently. The things that don’t work are the aspects that make it an actual Android app, like local device downloads and integration with Android APIs. That stuff is coming. After this next minor release, getting to those is my priority.
Not that I’m biased or anything, but Pinepods objectively has WAYYY more features than audiobookshelf for podcast management. Because it’s a podcast server, of course. Things like Podcasting 2.0 support with chapters, hosts, and transcripts; YouTube channel support; embedded Podcast Index, YouTube, and iTunes search support; external RSS feed support; etc. Audiobookshelf is great, but it’s an audiobook app. It’d probably be clutter to add a lot of this stuff.
And true on the link. It should redirect though.
I’m still using Docker Rootless, which I want to swap for Podman since rootless is second-class for Docker, but I haven’t read the documentation enough to understand Podman Quadlets and migrate my compose files, and there are some incompatible configurations, so even when using podlet I have to edit some things manually. I also want to migrate my server to MicroOS if possible, but I’m still testing things in a VM to understand it enough and weigh the cost-benefit u.u
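For what it’s worth, a hand-written Quadlet is pretty small once it clicks. A minimal .container sketch for a rootless setup (name, image, and port are examples):

```ini
# ~/.config/containers/systemd/web.container
[Container]
Image=docker.io/library/nginx:latest
PublishPort=8080:80
Volume=%h/web-data:/usr/share/nginx/html

[Install]
WantedBy=default.target
```

After a `systemctl --user daemon-reload`, Quadlet generates web.service from it.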
I’ve recently set up a recipe archival project using Tandoor. I’m working on converting all my grandparents’ fading, old-as-dust cooking recipes from their miscellaneous handwritten cursive notecards to digital.
Setup was uneventful, but it took a little research to figure out how to use a remote Postgres server; it turns out the app doesn’t give an error when it can’t connect to the server, it just fails to run.
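For anyone hitting the same wall, the relevant bits are just a handful of variables in Tandoor’s .env, as I recall from their docs (host and credentials here are examples):

```ini
DB_ENGINE=django.db.backends.postgresql
POSTGRES_HOST=db.lan
POSTGRES_PORT=5432
POSTGRES_USER=tandoor
POSTGRES_PASSWORD=changeme
POSTGRES_DB=tandoor
```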
Have to say, the actual program itself is absolutely absurd in how it handles permissions; it breaks all convention and took quite a bit of getting used to.