I like to code, garden and tinker

  • 0 Posts
  • 70 Comments
Joined 7 months ago
Cake day: February 9th, 2024




  • Fluent in Finance is just another forum that says it’s your fault you’re poor. They say you don’t play the game right, and they may be right for a rigged game. But the fact is you shouldn’t be required to play a game to get your fair share, and Fluent in Finance just says you didn’t invest right and didn’t set up your future to live off the backs of other workers.

    The rest is just hyperbolic headlines that drive engagement, which is the cancer of any social media platform. No one makes a billion dollars in income as defined by the US tax code; they make a billion dollars in equity, which can be used to back loans, which is part of the whole issue of obscuring cash flow. Then they can just use this as fodder to call anyone supporting this cause an idiot (“No one makes a billion dollars a year”), when we know they do, it’s just accounted for differently.


  • My question would be: why do you need a more powerful server? Are you monitoring your load and seeing it’s overloaded often? Are you just looking to hook more drives to it? Do you need to re-encode video on the fly for other devices? Giving some more details would help someone give a more insightful answer. I personally am using a Raspberry Pi 4, a Chromebox with an i7, an old HP rack server, and an old desktop PC for my self-hosting needs, as this is cheaper than buying all new hardware (though the electricity bill isn’t the greatest, haha, but oh well). If you are just looking for more storage, you can use the USB 3.0 ports on the Raspberry Pi 4B to add a couple of extra SSDs with an NVMe-to-USB 3.0 enclosure; the speeds will be fine for most applications.
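
    If you do go the USB-enclosure route, here is a minimal sketch of bringing a new drive up on the Pi; the device name and mount point are just examples, and mkfs erases whatever is on the drive:

    lsblk                               # identify the new disk, e.g. /dev/sda
    sudo mkfs.ext4 /dev/sda             # format it (this wipes the drive)
    sudo mkdir -p /mnt/usb-ssd
    sudo mount /dev/sda /mnt/usb-ssd    # mount it; add an /etc/fstab entry to persist across reboots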

    As for SSD vs HDD, SSD hands down. The only reason you’d pick an HDD is if you’re trying to get more storage cheaper and don’t mind a higher rate of failure. If your data is at all valuable, and it almost always is, add redundancy as well.

    And as for running Linux, if it can’t run Linux I wouldn’t want to own it.

    Edit: Fixed typo


  • This might help, sorry if it doesn’t, but here is a link to Cloudflare’s 5xx error code page for error 521. If you’ve done everything in the resolution list, your ISP might be actively blocking you from hosting websites, as doing so on residential service lines is generally against the ISP’s ToS. This is why I personally rent a VPS and have a WireGuard VPN set up so I can host through the VPS, which is basically a roll-your-own version of Tailscale using any VPS provider. This way you don’t need to expose anything via your ISP’s router/WAN, and they can’t see what you are sending or which ports you are sending on (other than the encrypted VPN traffic to your VPS, of course).
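
    If it helps, here is a rough sketch of the VPS side of that setup. Everything in it is an assumption for illustration (public interface eth0, WireGuard interface wg0, home peer at 10.8.0.2, service on port 443), not a drop-in config:

    sudo sysctl -w net.ipv4.ip_forward=1                    # let the VPS forward packets
    sudo iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 443 \
      -j DNAT --to-destination 10.8.0.2:443                 # send incoming traffic down the tunnel
    sudo iptables -t nat -A POSTROUTING -o wg0 -p tcp -d 10.8.0.2 --dport 443 \
      -j MASQUERADE                                         # so replies route back through the VPS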



  • I’ve never run this program, but I skimmed the documentation. You should be able to use SHIORI_DIR (or a custom database table following those instructions) along with the -p argument when launching the web interface. A simple bash script that should work:

    # point this instance at its own data directory, then serve it on its own port
    export SHIORI_DIR=/path/to/shiori-data-dir
    shiori serve -p 8081
    

    To run multiple instances, I’d suggest setting up each one as a service on your machine in case of reboots and/or crashes.
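
    For example, a hypothetical systemd unit for one instance, saved as something like /etc/systemd/system/shiori-8081.service (the binary path, data directory, and port are placeholders to adapt):

    [Unit]
    Description=Shiori bookmark manager (port 8081)
    After=network.target
    [Service]
    Environment=SHIORI_DIR=/path/to/shiori-data-dir
    ExecStart=/usr/local/bin/shiori serve -p 8081
    Restart=on-failure
    [Install]
    WantedBy=multi-user.target

    Then sudo systemctl daemon-reload && sudo systemctl enable --now shiori-8081 brings it up and keeps it running across reboots and crashes.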

    Now for serving them, you have two options. The first is to just let users connect to the port directly, but this is generally not done for outward-facing services (not that you can’t). The second is to set up a reverse proxy and route the traffic through subdomains or subpaths. Nginx is my go-to solution for this; I’ve also heard good things about Caddy. You’ll most likely have to use subdomains, as lots of apps assume they are at the root path without some tinkering.
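
    As a rough illustration of the subdomain approach, an Nginx server block along these lines (the domain, port, and headers are just an example to adapt, not a complete hardened config) would go in something like /etc/nginx/conf.d/bookmarks.example.com.conf:

    server {
        listen 80;
        server_name bookmarks.example.com;
        location / {
            proxy_pass http://127.0.0.1:8081;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }

    After that, sudo nginx -t && sudo systemctl reload nginx should pick it up, and each additional instance just gets its own server block and subdomain.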

    Edit: Corrected incorrect cli arguments and a typo.



  • If you are expecting a more Windows-like experience, I would suggest Ubuntu or Kubuntu (or any other distro using GNOME/KDE), as these are much closer to a modern Windows GUI. With Ubuntu, I can use the default file manager (Nautilus), press Ctrl+F, filter files via *.ext, select those files, and cut and paste them into a new folder (drag and drop does not seem to work from the search results). In Kubuntu, KDE’s file manager (Dolphin) doesn’t recognize * as a wildcard in search, but it does support drag and drop between windows.
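
    For what it’s worth, the same filter-and-move can also be done from a terminal on any distro; a minimal sketch, where the paths and extension are just examples:

    mkdir -p ~/Documents/pdfs                 # destination folder
    mv ~/Documents/*.pdf ~/Documents/pdfs/    # move everything matching the pattern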


  • In my humble opinion, we too are simply prediction machines. The main difference is how efficient our brains are at the huge number of tasks they have to accomplish, given their size and energy requirements. No matter how complex the network is, it is still a mapped outcome; the number of factors weighed is just extremely large, which gives a more intelligent response. You can see this with each new GPT model: larger and larger parameter sets give more and more intelligent answers. The fact we call these “hallucinations” shows how effective the predictive math is; it mimics humans’ ability to just make things up on the fly when we don’t have a solid knowledge base to back them up.

    I do like this quote from the linked paper:

    As we will discuss, we find interesting evidence that simple sequence prediction can lead to the formation of a world model.

    That is to say, you don’t need complex solutions to map complex problems, you just need to have learned how you got there. It’s never purely random attempts at the problem; it’s always predictive attempts that try to map the expected outcomes and learn by getting them right and wrong.

    At this point, it seems fair to conclude the crow is relying on more than surface statistics. It evidently has formed a model of the game it has been hearing about, one that humans can understand and even use to steer the crow’s behavior.

    Which is to say that it has a predictive model based on previous games. This does not mean it must rigidly follow previous games, but that by playing many games it can see how each move affects the next. This is a simpler example because most board games are simpler than language, with fewer possible outcomes. This isn’t to say that the crow is now a grandmaster at the game, but it has the reasoning to understand possible next moves, knows illegal moves, and knows to take the most advantageous move based on its current model. This is all predictive in nature, with “illegal” moves assigned very low probability because the learned behavior shows those moves never happen. This also allows possible unknown moves that a different model wouldn’t consider, while still providing what is statistically the best move according to its model. This allows the crow to be placed into unknown situations and give an intelligent response instead of just going “I don’t know this state, I’ll do something random”. The prediction won’t always be correct, but it will more often than not be a valid, statistically sound move.

    Overall, we aren’t totally sure what “intelligence” is; we are just organisms that have developed more and more capabilities to process information out of a need to survive. But getting down to it, we know neurons take inputs and give outputs based on what they perceive to be the best response for the given input, and when enough of these are added together we get “intelligence”. In my opinion it’s still all predictive; it’s how the networks are trained and how they gain meaning from the data that isn’t always obvious. It’s only when you blindly accept any answer as correct that you run into the issues we’ve seen with ChatGPT.

    Thank you for sharing the article; it was an interesting read and helped clarify my understanding of the topic.


  • Disclaimer: I am not an AI researcher and just have an interest in AI. Everything I say is probably gibberish, just my amateur understanding of the AI models used today.

    It seems these LLMs use a clever trick to give words meaning via statistical probabilities of their usage, so any result is just a statistical chance that those words work well with each other. The number of indexes used to represent “tokens” (in this case words), along with the number of layers in the model used to correlate usage of those tokens, seems to drastically increase the “intelligence” of the responses. This doesn’t seem able to overcome unknown circumstances; the model does what it always does and relies on probability to answer the question, so the next closest thing from the training data is substituted and considered “good enough”. I would think some confidence variable is what is truly needed in current LLMs, as they seem capable of giving meaningful responses but give a “hallucinated” response when not enough data is available to answer the question.

    Overall, I would guess this is a limitation in the LLM’s ability to map words to meaning. Imagine reading everything ever written; you’d probably be able to make intelligent responses to most questions. Now imagine you were asked something you had never read about, but were expected to respond with an answer anyway. That, I personally feel, is what these “hallucinations” are: the LLM’s best approximation. You can only answer what you know reliably, otherwise you are just guessing.


  • Looking over the GitHub issues I couldn’t find a feature request for this, so it seems it’s not being considered at the moment. You could make a suggestion over there; I do think this feature would be useful, but it’s up to the devs to implement it.

    That being said, I wouldn’t count on this feature being implemented. It would only work on instances that obey the rules, so some instances could simply ignore it. When you look up your account on my instance (link here), it is up to my server to respect your option to hide your profile comments. This means the option has to be federated per user, which adds a great deal of complexity to the system and can be easily thwarted by anyone running an instance that chooses not to follow the rules.

    If your goal is to stop people looking up your historical activity, it might be best to use multiple accounts and switch to new ones every so often to break up your history. You could also delete your content, but it is again up to each instance to respect the deletion request. It’s not an optimal solution, but depending on your goals it is the one available.

    Edit: Also, if you’re curious about the downvotes, it’s not the subject matter; your post violates Rule 3 (“Not regarding using or support for Lemmy”).




  • From Time (link: https://time.com/6133336/jan-6-capitol-riot-arrests-sentences/)

    So far, the median prison sentence for the Jan. 6 rioters is 60 days, according to TIME’s calculation of the public records.

    An additional 113 rioters have been sentenced to periods of home detention, while most sentences have included fines, community service and probation for low-level offenses like illegally parading or demonstrating in the Capitol, which is a misdemeanor.

    Overall these people are getting less time than kids who get caught with some weed on them.

    You can think you have rights, or you can know your rights, but when you violate the law, don’t be surprised when one of the most pro-incarceration states around throws you in jail. Lots of protesters get arrested and prosecuted as a scare tactic. This is assuming these people didn’t have seditious intentions, which does change things a bit. Overall it sounds like they fucked around and found out; at least protesters fighting for real causes are more prepared to get fucked with by the state than these jokers.




  • As for the data transfer costs, any network data originating from AWS that hits an external network (an end user or another region) typically will incur a charge. To quote their blog post:

    A general rule of thumb is that all traffic originating from the internet into AWS enters for free, but traffic exiting AWS is chargeable outside of the free tier—typically in the $0.08–$0.12 range per GB, though some response traffic egress can be free. The free tier provides 100GB of free data transfer out per month as of December 1, 2021.

    So you won’t be charged for incoming federated content, but serving content to end users counts as traffic exiting AWS. I am not sure of your exact setup (AWS pricing is complex), but typically this is charged. It’s probably negligible for a single-user instance, but I would be careful serving images from your instance to popular instances, as that could incur unexpected costs.
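
    As a rough worked example, with assumed numbers rather than your actual bill: if your instance served 250 GB out of AWS in a month, the first 100 GB would fall under the free tier and the remaining 150 GB at roughly $0.09/GB would come to about $13.50.

    # back-of-the-envelope egress estimate (volumes and rate are assumptions)
    echo "(250 - 100) * 0.09" | bc    # -> 13.50 (USD)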


  • These are just my opinions on the matter at hand.

    TL;DR: it’s not all about growing as massive as possible and letting everyone talk to everyone. It’s about communities being able to make choices for their user base and having the freedom to choose who to federate with. It’s also about users having a choice of which instance they use to interact with the fediverse, and with whom. Having Meta involved limits these choices in not-so-obvious ways.

    Doesn’t the fediverse have an inherent protection and/or immunity from corporate take-over?

    Yes, but that does not mean it is invulnerable. Take the World Wide Web as an example: over the past couple of decades the decentralized web has become increasingly centralized. Projects such as Lemmy and Mastodon are a shot back at this trend, an attempt to break the web back up. Each instance gets to decide whether letting large corporations federate with it is the best choice, and it seems that a lot do not want this; that is exactly the kind of protection from corporate takeover that is inherent. The more large central servers are allowed to take a central role, the more power they gain to snuff out small communities and instances, whether by fragmenting user bases and communities over time or by any other dirty tricks they can come up with.

    Also, having billions of dollars at your disposal is known to increase your influence overall. They can outspend anyone to sell most people on how interconnected and fediverse-friendly Threads is; if you let them sell that lie, they will win in time. They’ll do this, pull the rug, and say the other independent instances aren’t cooperating. They will shut off access to these communities in one way or another and begin the process of centralization. It has happened before, and it will happen again.

    Aren’t we protected?

    If you choose not to use Threads, you are not giving your information directly to Meta. But that does not mean you are safe. Meta is a corporation, and will pull whatever tricks it can to take over as the dominant player. They are going head to head with Twitter; what makes you think instances a fraction of Twitter’s size are safe?

    Also, saying we are isolated by our individual instances is a bit humorous, as they are federated. If one instance pushes most of the content, is that really isolated? What about upvotes, engagement, and any other activity that is pushed to other servers via the ActivityPub protocol? These will all be taken in by Meta, which means you are feeding them activity. Sure, it’s safer, but they still get more data by engaging in the ActivityPub protocol than they would by scraping pages. They also don’t have to play fair with the protocol; there are plenty of dirty tricks that could be used to hamper content on instances other than their own.

    Is there anything currently stopping Meta from scraping the Fediverse for our content?

    No, and the fediverse should not care. The goal of the fediverse at the moment is to stay independent, have a user base that is not reliant on a single entity, and stay away from the influence of corporate interests. If you operate in a public space, someone’s always going to be able to see it. It’s all about who owns that public space.

    Won’t we grow & educate?

    Who is “we”? Users that value their freedom will stay on independent fediverse instances. Those looking for a Twitter alternative will probably go to Threads. Those who don’t care will probably stay on Twitter. Any of these users might have multiple accounts on some or all of these services. Trying to group this together as “we” is a bit disingenuous.

    As for growth, it’s not safe to assume that independent instances will grow because of federation with Threads. Users that are on Threads are likely to stay on Threads; users that join independent instances are likely to stay there. Look to Linux users to see why you aren’t going to convert many people by preaching the virtues of freedom and decentralization, you’ll just become another “fanboy”.

    Aren’t we worried we’re forcing an ultimatum while the Fediverse is still in its infancy?

    What is the ultimatum? This is a pretty loaded question, since some of the fediverse is already fractured. The fact that you can spin up your own instance, invite whoever you want, and keep the interests of your community out of the hands of corporations is the goal: freedom to host your own community. Anything else is just a capitalist mindset on growth, and the line doesn’t always have to go up. Getting the most users isn’t the end game; having a community that you belong to and feel a part of is.

    What’s the harm in pulling the ripcord if we try it, and it’s truly not a good fit?

    Each instance chooses what is best for its community. Being part of the mainstream content feed isn’t the goal of most of these decentralized communities.

    “What about an influx of low-quality content?”

    Why would instances need to let individual users block Meta when they already know their users want Meta blocked? What’s stopping users from going to an instance that doesn’t block Meta if their instance disagrees with them? It’s all about doing what an instance’s community wants, and users can migrate if they feel their needs aren’t being met.

    “What if Meta doesn’t moderate well?”

    Meta will probably be able to moderate for their advertisers better than most instance operators can. But again, it’s not about moderation and sanitizing content for advertiser revenue; it’s about having a space that is for the community, by the community. It doesn’t need to be a single homogeneous community just so ads can sell. Some of us want that outside of a corporation’s control, others don’t or don’t care, and all of those are valid. Thankfully, everyone has a choice instead of being forced into one or the other.