Nope. I don’t talk about myself like that.

  • 0 Posts
  • 20 Comments
Joined 2 years ago
Cake day: June 8th, 2023


  • Then yes, you’d probably be fine with any competent mini PC and your favorite flavor of firewall… I would recommend OPNsense personally, but there are others out there that I’m sure would meet your needs.

    Just about any decent mini PC can handle 1 Gbps, from what I saw a few years ago. You need much bigger horses to get up to 10 Gbps. But I wouldn’t know what the minimum specs would be… I’ve been stuck in the higher-end world for a while, so that information has kind of vanished from my memory… Someone else can chime in? I suspect the little baby N150 units could probably do 1 Gbps, especially since you’re only doing minimal throughput over WireGuard as well (I have a few nodes and can push into 1 Gbps territory, so once again I’m resource-heavy… and thus don’t have the lower requirements committed to memory anymore).

    ISP -> ARRIS modem -> minipc -> Switch -> anything else you need including access points.

    All of the “routers” that have wifi and a boatload of ports (unless we’re talking enterprise stuff) are hybrid devices: router + switch + AP. This is convenient for typical consumers, but quite restrictive for anyone who wants to go prosumer or higher. For example… Wi-Fi 7 just released last year. I swapped my AP out and now I have it. I can also mount that AP in the ceiling where it gives me the best coverage, rather than the consumer answers of “replace the whole unit” or “add a shitton of mesh nodes that ultimately kind of suck”… solutions manufacturers love because you spend more money on their products. Or say you want to add a PoE device… well, now that consumer unit is useless to you.


  • We’re missing crucial information.

    What bandwidth do you get from your ISP? Do you want to run things like IDS/IPS? What kind of throughput do you want from WireGuard?

    What it takes to connect a 100/10 DOCSIS based service is completely different to a 1/100 service is completely different to an 8/8gbps fiber service.

    You said WireGuard on the modem… your modem shouldn’t be doing any routing of tunnels at all. I almost suspect you don’t know the difference between a router and a modem because of this misspeak. If you don’t, go watch some networking-basics YouTube videos and get a firm understanding before you commit to buying stuff you have no idea what to do with.

    In my case, I’m blessed with 8/8 fiber. I have a full fancy Supermicro server running OPNsense: 10 Gbps on the WAN side, 40 Gbps on the LAN side for multiple VLANs (about a dozen). It’s overkill, but my ISP offers it… and it means the “router” I use to actually get that 8 Gbps cost ~$2k. With big bandwidth comes big processing overhead if you want any form of protection and tunneling (VPN or SDN).

    You shouldn’t really care how many interfaces your router has outside of potentially doing LACP sort of redundancy. Use a switch to get more ports for your devices.


  • Sure, but my point is that it’s no different to an AUR/user repo. At some point you’re just trusting someone else.

    I think the whole “don’t pipe bash scripts into a terminal” rule is too broad. It’s the same risk factor as blind trust in ANY repository. If you trust the repo, what does it matter whether you install the program via the repo or a bash script? It’s the same. In this specific case, I trust the repo pretty well; I’ve read well over half of the lines of code I actually run. When tteck was running it… he was very, very sensitive about what was added, and I had 100% faith in it. Since the community took it over after his death we still seem pretty well off… but it’s been growing much faster than I can keep up with.

    But none of these issues are any different than installing from AUR.

    The rule should just be “don’t run shit from untrusted sources” which could include AUR/repo sources.
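    For anyone who wants a low-effort middle ground between blind `curl | bash` and auditing an entire repo: pull the script down, read it, and only then run it. The URL and filename below are placeholders, not a real installer:

    ```shell
    # Instead of piping straight into bash:
    #   curl -fsSL https://example.com/install.sh | bash
    # fetch the script to a file first (example.com is a placeholder URL)
    curl -fsSL -o install.sh https://example.com/install.sh

    less install.sh          # actually read what it does
    sha256sum install.sh     # compare against a published checksum, if one exists

    bash install.sh          # run only once you're satisfied
    ```

    Same trust decision either way; this just gives you a chance to look before you leap.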



  • Eh… I have my own repo that pulls the PVE repo, updates a bunch of things to how I want them, and then runs a local version of the main page. While I don’t stare at every update they make… there are likely enough of us out there looking at the scripts that we’d sound the alarm if something off was happening.






  • Saik0@lemmy.saik0.com to Selfhosted@lemmy.world · Invidious selfhosted · edited 5 days ago

    I’ve had a self-hosted instance running for the past 3-4 years. But pulling new PO tokens isn’t working anymore, so my instance is kind of broken right now.

    To be frank, it’s unlikely you’ll get an instance operational at this point unless something changes.

    Edit: You have to rotate IP addresses when the PO Token problem happens. But it’s a gamble if the next IP you get from your ISP will be allowed by Youtube.


  • 18-24 credit hour semesters… and summer courses when available.

    Since I already knew most of the program going in, the course load in general was stupidly easy to manage. But I would not recommend it unless you really know the material/subject matter.

    Edit: there was heavy incentive… the GI Bill pays for 4 years of schooling, and I only have a few months of that 4-year period left. If I didn’t take everything accelerated, I couldn’t get the master’s. So I just went full ham on the curriculum.


  • Well, I don’t mean to harp on it… Plex in this instance is much better off: when provided proof of the problem, they fixed it. Jellyfin has had issues about this going back to 2019… 6 years ago. Still no fix in sight, and the first ticket I linked proved the concept can be abused. Meanwhile the issues keep getting hidden: “We’re closing this because we’re consolidating… oh wait, we’re closing it because we’re splitting the issues out.” I’ve legit had people tell me the problems were fixed because they saw the issue closed.

    And now I hear that JF is even deprecating built-in SSL and mandating a reverse proxy (or esoteric custom config) to get SSL again… Seems they’re going backwards?

    I had Jellyfin set up for just myself because I’d love to get away from the risk of Plex screwing shit up (and to get off their SSO). But between the frustration of the dev responses to some of these issues and the fact that I’m literally the only person able to deal with the restrictions needed to keep it secure… I just turned it off. I didn’t want to manage two systems because my kids/wife/other family couldn’t figure out how to use it.


  • others are about media access

    Yup, and these are the biggest risks IMO. I find the well-organized big media companies, with deep pockets and a few basic scripts we know would work, to be the biggest vector of liability.

    https://github.com/jellyfin/jellyfin/issues/1501
    https://github.com/jellyfin/jellyfin/issues/5415#issuecomment-2071798575 (and the following comments)
    https://github.com/jellyfin/jellyfin/issues/13984

    A person’s biggest threat running Jellyfin is going to be the media companies themselves. Sony (the company known for installing rootkits on people’s computers) can pre-hash a list of their movies using commonly configured locations/naming schemas and enumerate your system for their content. Since there’s no authentication on the endpoint, they’re likely not even violating any anti-circumvention law. The “random UUID” is just the MD5 hash of the path/filename, so it’s actually highly guessable… especially for people using default docker configs and *arr stacks that normalize names for you.
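    The guessability claim is easy to demonstrate: if the item identifier really is just MD5(path), anyone who can guess a standard *arr-style path can compute it offline. The path below is a hypothetical example of a common layout, not a real endpoint call:

    ```python
    import hashlib

    # Hypothetical path following a common docker + *arr naming convention
    path = "/data/movies/Inception (2010)/Inception (2010).mkv"

    # If the "random UUID" is just MD5(path), it is fully precomputable:
    item_id = hashlib.md5(path.encode("utf-8")).hexdigest()
    print(item_id)  # 32 hex chars, identical for every install using this layout
    ```

    No brute force needed; a studio only has to hash its own catalog once and replay the IDs against any instance it finds.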

    Their response was “this attack isn’t in the wild” (as if they’d actually know… running a script to push a few hundred thousand requests through a list of movies isn’t all that taxing, and users won’t even notice it to report it… let alone have enough logging to spot it in the first place) and “it breaks compatibility, so we don’t want to do it.” Which I find laughable. It turned me off from Jellyfin altogether.

    Edit: And because every time I bring up the issue I get downvoted for “fear mongering”… there are ways to mitigate it: use non-standard naming schemes in your files/folder structure, and run fail2ban. But that expects users to actually do it… and I could do it… but it’s a security risk nonetheless, and the developers’ response to the risk is what’s scary to me.
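    For the fail2ban half of that mitigation, a hypothetical sketch, assuming Jellyfin sits behind nginx writing the standard combined access log: ban any IP that hammers item endpoints with 404s. The filter name, paths, and thresholds here are all made up, not official Jellyfin or fail2ban defaults:

    ```ini
    # /etc/fail2ban/filter.d/jellyfin-enum.conf  (hypothetical filter name)
    [Definition]
    # Matches nginx combined-log lines where a guessed item ID returned 404
    failregex = ^<HOST> .* "(GET|HEAD) /(Items|Videos)/\S* HTTP/[0-9.]+" 404\b

    # /etc/fail2ban/jail.d/jellyfin-enum.conf
    [jellyfin-enum]
    enabled  = true
    port     = http,https
    filter   = jellyfin-enum
    logpath  = /var/log/nginx/access.log
    maxretry = 20
    findtime = 60
    bantime  = 3600
    ```

    This only slows bulk enumeration down; it does nothing against a correct first guess, which is why the non-standard-naming half still matters.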

    Edit 2: The LDAP one… I should clarify I don’t care about that one, since it requires extra configuration most users won’t do. But the media-exposure issues are the default, universal behavior, and protecting against them requires setting things up “non-standard,” which users generally WON’T do.






  • Check again…

    That’s “free space”. The 414 represents ~62% of my space (38% used). I’m at just under 700 TiB usable. And 2 disks are out of the array at the moment because the backplane went stupid. Turns out it’s a pain in the ass to open a server chassis to replace a backplane when you have to unmount 70 disks. And I’m pretty lazy.

    6 × RAIDZ2 | 10 wide | 14.55 TiB
    873 TiB raw.
    960 TB raw.
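    The arithmetic behind those figures, for anyone checking (the TiB-to-TB conversion is where 873 and 960 both come from):

    ```python
    # 6 RAIDZ2 vdevs, 10 disks wide, 14.55 TiB per disk (~16 TB drives)
    vdevs, width, parity, disk_tib = 6, 10, 2, 14.55

    raw_tib = vdevs * width * disk_tib                 # all 60 disks
    raw_tb = raw_tib * 2**40 / 1e12                    # same capacity in decimal TB
    usable_tib = vdevs * (width - parity) * disk_tib   # RAIDZ2 loses 2 disks per vdev

    print(f"{raw_tib:.0f} TiB raw ~= {raw_tb:.0f} TB raw, ~{usable_tib:.0f} TiB usable")
    # → 873 TiB raw ~= 960 TB raw, ~698 TiB usable
    ```

    That ~698 TiB is the “just under 700” usable figure, before ZFS metadata and reservation overhead.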

    This graph might be better…