xylogx

@xylogx@lemmy.world


xylogx ,

I use alt browsers like DDG and Brave that have built-in ad blocking. I also have a Pi-hole.

xylogx ,

Yes, they are available on iOS.

xylogx OP ,

Not anymore, according to Wikipedia:

SteamOS, version 3.0. This new version is based upon Arch Linux with the KDE Plasma 5 desktop environment

xylogx OP ,

Good question! After installing emulators on my Steam Deck, I realized it could run as a desktop. I also learned it was a rolling release. That seemed attractive to me, so I wanted to hear how mainstream this could be.

Sounds like the answer is: not very. There are some other good suggestions in this thread I might try, though.

xylogx ,

Who are some people to follow on BookWyrm?

Google Tries to Defend Its Web Environment Integrity as Critics Slam It as Dangerous ( techreport.com )

Attacks and doxing make me personally MORE likely to support stronger safety features in chromium, as such acts increase my suspicion that there is significant intimidation from criminals who are afraid this feature will disrupt their illegal and/or unethical businesses, and I don’t give in to criminals or bullies...

xylogx ,

What exactly is the attestation checking? As far as I can tell, it is a TPM assertion, possibly that you have Secure Boot enabled and that the browser has not been tampered with. Is there anything else? I looked at the GitHub page, but all I saw were placeholders. Is this documented somewhere?

xylogx ,

That makes a lot of sense. I am not sure how that would work on Windows, where users typically run with admin credentials. Yes, I cannot modify the boot loader, but with admin credentials I can do many malicious things to your traffic between the browser and the OS, up to and including attaching a debugger to your browser process to read its memory.

I know it is possible for Linux to pass Secure Boot in some cases, so in theory there could be attestation on Linux systems, but this suffers from the same flaw as Windows, since users have root access.

In the end, the only thing this will do is prevent someone from using curl or other CLI tools to access a site that requires attestation. Will this prevent bots? I am not certain. You could in effect guarantee a one-to-one relationship of users to TPMs/secure enclaves. This would slow down bot farmers, but not stop them. A rough sketch of that idea is below.
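To make the one-to-one idea concrete, here is a rough Python sketch of a challenge-response attestation handshake. This is not Google's WEI protocol or any real attester API, just the general pattern of a hardware-held key signing a fresh nonce; all the names are made up for illustration, and it assumes the third-party `cryptography` package is installed.

```python
# Toy sketch (NOT the WEI protocol): one device, one non-exportable key.
# The site only ever sees the public key plus signatures over its own nonces,
# so each enrolled key can stand in for at most one "attested user".
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey, Ed25519PublicKey)
from cryptography.hazmat.primitives import serialization
import os

class FakeTPM:
    """Stands in for a TPM/secure enclave: in real hardware the key never leaves the chip."""
    def __init__(self):
        self._key = Ed25519PrivateKey.generate()

    def public_key_bytes(self) -> bytes:
        return self._key.public_key().public_bytes(
            serialization.Encoding.Raw, serialization.PublicFormat.Raw)

    def sign(self, challenge: bytes) -> bytes:
        return self._key.sign(challenge)

def site_verifies(enrolled_pubkey: bytes, challenge: bytes, signature: bytes) -> bool:
    """The site checks the signature over its own fresh nonce against the enrolled key."""
    try:
        Ed25519PublicKey.from_public_bytes(enrolled_pubkey).verify(signature, challenge)
        return True
    except Exception:
        return False

# One handshake: the site issues a fresh nonce, the device signs it, the site verifies.
tpm = FakeTPM()
nonce = os.urandom(32)
print(site_verifies(tpm.public_key_bytes(), nonce, tpm.sign(nonce)))  # True
```

Nothing in that handshake stops a bot farmer from enrolling a warehouse full of real devices, which is exactly the point: it raises the cost per bot, it does not eliminate bots.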

Chinese bot farm with hundreds of physical smartphones -> youtu.be/aSESD6rm54o

xylogx ,

It’s pretty bad. You are going to be vulnerable to password spraying at the very least, and a phishing email or a credential leak, both incredibly common, will result in a bad day.

You need MFA, preferably FIDO-based MFA with conditional access.
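If it helps to see why the FIDO part matters, here is a toy Python sketch of the origin-binding idea. Real WebAuthn uses per-site public-key credentials rather than an HMAC, so treat this purely as an illustration; the names are made up.

```python
# Toy illustration of why FIDO-style MFA resists phishing. The browser, not the
# user, supplies the origin that gets signed, so a response captured on a
# look-alike phishing domain never verifies at the real site. HMAC is used here
# only to keep the sketch dependency-free; real authenticators sign with a
# per-credential private key.
import hmac, hashlib, os

device_secret = os.urandom(32)  # stands in for a per-credential private key

def authenticator_response(challenge: bytes, origin: str) -> bytes:
    # The authenticator binds the server's challenge to the origin it actually saw.
    return hmac.new(device_secret, challenge + origin.encode(), hashlib.sha256).digest()

def server_verify(challenge: bytes, response: bytes, expected_origin: str) -> bool:
    expected = hmac.new(device_secret, challenge + expected_origin.encode(), hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

challenge = os.urandom(32)
good = authenticator_response(challenge, "https://login.example.com")
phished = authenticator_response(challenge, "https://login.examp1e.com")

print(server_verify(challenge, good, "https://login.example.com"))     # True
print(server_verify(challenge, phished, "https://login.example.com"))  # False
```

Contrast that with an OTP code, which a user can be tricked into typing into the wrong site; there is nothing binding the code to the real origin.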

xylogx , (edited )

Great article, thank you for sharing!

So, if I understand correctly, Wiz is saying that some apps that use Azure AD might not have sufficient logging to identify the IOCs, but Microsoft apps like Exchange Online and Teams do have sufficient logging?

xylogx ,

So if you get pulled over and are identified as an illegal immigrant while holding a license from one of those states, you could be given a ticket for driving without a license. The question is, how would they know that you are an illegal immigrant?

Community-driven open-source LLM

I’m looking for an open-source alternative to ChatGPT which is community-driven. I have seen some open-source large language models, but they’re usually still made by some organizations and published after the fact. Instead, I’m looking for one where anyone can participate: discuss ideas on how to improve the model, write...

xylogx ,

Have a look at this paper from Microsoft Research -> microsoft.com/…/orca-progressive-learning-from-co…

“ Recent research has focused on enhancing the capability of smaller models through imitation learning, drawing on the outputs generated by large foundation models (LFMs). A number of issues impact the quality of these models, ranging from limited imitation signals from shallow LFM outputs; small scale homogeneous training data; and most notably a lack of rigorous evaluation resulting in overestimating the small model’s capability as they tend to learn to imitate the style, but not the reasoning process of LFMs. To address these challenges, we develop Orca, a 13-billion parameter model that learns to imitate the reasoning process of LFMs. Orca learns from rich signals from GPT 4 including explanation traces; step-by-step thought processes; and other complex instructions, guided by teacher assistance from ChatGPT. To promote this progressive learning, we tap into large-scale and diverse imitation data with judicious sampling and selection. Orca surpasses conventional state-of-the-art instruction-tuned models such as Vicuna-13B by more than 100% in complex zero-shot reasoning benchmarks like Big-Bench Hard (BBH) and 42% on AGIEval. Moreover, Orca reaches parity with ChatGPT on the BBH benchmark and shows competitive performance (4 pts gap with optimized system message) in professional and academic examinations like the SAT, LSAT, GRE, and GMAT, both in zero-shot settings without CoT; while trailing behind GPT–4. Our research indicates that learning from step-by-step explanations, whether these are generated by humans or more advanced AI models, is a promising direction to improve model capabilities and skills.”
