WAT!? No internet!?
Reminded me of what happened at the MindTheTech conference half a year ago.
https://peervideo.club/w/p/i4BetLY7RZa5yeNLJriXPW?playlistPosition=3
Netscape Communicator, Netscape Communicator, KHTML, Netscape Communicator
It seems that we're focusing on two different parts of the problem.
Finding the optimal way to classify which images are best compressed in bulk is an interesting problem in itself. In this particular case the person asking had already picked out similar images by hand, and they can be identified by their timestamps, which keeps any similarity comparison cheap. What I wanted to find out was how well the similar images can be compressed with various methods and codecs with minimal loss of quality. My goal was not to use it as a method to classify the images; it was simply to examine how well the compression stage would work with various methods.
Wait… this is exactly the problem a video codec solves. Scoot and give me some sample data!
I was not talking about classification. What I was talking about was a simple probe at how the compressed size of a collage of similar images compares to that of the images compressed individually. The hypothesis is that a compression codec would compress images with a similar color distribution better as a spritesheet than if it encoded each image individually. I don’t know, the savings might be negligible, but I’d assume there is something to gain, at least for some compression codecs. I doubt deduplication after compression has much to gain.
I think you’re overthinking the classification task. These images are very similar and I think comparing the color distribution would be adequate. It would of course be interesting to compare the different methods :)
The first thing I would do when writing such a paper would be to test current compression algorithms by creating a collage of the similar images and seeing how that compares to the size of the individual images.
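If anyone wants to poke at that, a rough sketch of the experiment could look like the following Python, assuming Pillow is installed and WebP as the codec; the directory name, glob pattern and quality setting are only placeholders:

# Rough experiment: does a collage of similar images compress better
# than the same images compressed one by one? (Pillow + WebP assumed.)
import io
from pathlib import Path
from PIL import Image

def encoded_size(img, quality=85):
    """Size in bytes of the image encoded as lossy WebP."""
    buf = io.BytesIO()
    img.save(buf, format="WEBP", quality=quality)
    return buf.tell()

def compare(image_dir="similar_images"):
    paths = sorted(Path(image_dir).glob("*.jpg"))  # placeholder pattern
    images = [Image.open(p).convert("RGB") for p in paths]

    # Each image encoded on its own.
    individual = sum(encoded_size(img) for img in images)

    # Naive horizontal spritesheet; assumes roughly equal dimensions.
    width = sum(img.width for img in images)
    height = max(img.height for img in images)
    collage = Image.new("RGB", (width, height))
    x = 0
    for img in images:
        collage.paste(img, (x, 0))
        x += img.width

    print("individual:", individual, "bytes")
    print("collage:   ", encoded_size(collage), "bytes")

if __name__ == "__main__":
    compare()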
Desktop Applications
Intervene? I don’t think they have that kind of power ;)
Thank you.
Interesting that they’ll make it a user choice. Who would answer yes?
On 22 July 2024, Google announced that it is changing its approach to Privacy Sandbox. Instead of removing third-party cookies from Chrome, it will be introducing a user-choice prompt, which will allow users to choose whether to retain third party cookies.
Do you have a source for that excus… uehm… claim?
Unless someone has registered the trademark for those specific purposes you’re clear. A trademark is only valid within a specific field of purpose. Trademarks are there to avoid consumers mistaking one brand for another.
There are a lot of entertaining articles on Techdirt about companies not understanding trademark law.
I agree, and the requirement for an exact placement of attribution is not very friendly to derivative works either. I don’t think that section 7 of the AGPL allows adding anything other than the exact terms in section 7, and it has a clause that allows removing non-permissive additions to the AGPL, but I’ve sent an e-mail to the FSF asking what their position is. I would be very concerned about picking the AGPL as a license for my projects if section 7 allows adding clauses like that. Anyhow, the clauses were added in this commit, so anything prior to 7.3.0 is plain AGPL.
There is no free and open source version of Only Office. It pretends to be licensed under the AGPL, but they have added the following to the license, which in effect completely forbids you from redistributing it. It can at best be called Source Available.
The interactive user interfaces in modified source and object code versions of ONLYOFFICE must display Appropriate Legal Notices, as required under Section 5 of the GNU AGPL version 3.
Pursuant to Section 7 § 3(b) of the GNU AGPL you must retain the original ONLYOFFICE logo in the upper left corner of the user interface when distributing the software.
Pursuant to Section 7 § 3(e) we decline to grant you any rights under trademark law for use of our trademarks.
https://raw.githubusercontent.com/ONLYOFFICE/DesktopEditors/master/LICENSE
You need to use a dmix PCM for your card as output.
If you type aplay -L | grep dmix it’ll show you a list of dmix devices. You can set one as the default by creating a file named .asoundrc in your home folder with the content:
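# Route the default PCM through dmix; replace the card/device below with one listed by aplay -L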
pcm.!default {
    type plug
    slave.pcm "dmix:CARD=Set,DEV=0"
}
You of course replace the value of slave.pcm with your desired card name; I just gave one of mine as an example. The above default configuration also takes care of automatic conversion, via the plug pcm, from different sample rates and formats to the settings the hardware is set up to use. Every program that uses ALSA for output will read the above file, but you need to restart a program for changes to take effect.
If you enjoy audio production I’m sure you’ll find some good use for Jack, but for audio mixing all you need is to use an ALSA dmix pcm for output.
A solution I’ve used for the glibc problem is to build on an older distribution in a chroot. There is also this project, which might be of use for picking a specific version of glibc. The project README also explains how to do it manually.
As for distribution, I prefer something like makeself.sh, which installs to either ~/.local/ or, if it is to be installed system-wide, to /usr/local or /opt. The concept is just a small shell script with a compressed archive appended to it; it is easy to modify and even to create by hand using standard tools like cat. This is a method widely used by native Linux games.
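To illustrate the concept (not makeself.sh itself, just the stub-plus-appended-archive idea), a toy builder could look like this in Python; the payload directory, output name and install prefix are made up for the example:

# Build a toy self-extracting installer: a shell stub followed by a tar.gz payload.
import tarfile
from pathlib import Path

STUB = """#!/bin/sh
# Everything after the marker line below is a tar.gz payload.
PREFIX="${1:-$HOME/.local}"
mkdir -p "$PREFIX"
LINE=$(awk '/^__ARCHIVE__$/ {print NR + 1; exit}' "$0")
tail -n +"$LINE" "$0" | tar -xzf - -C "$PREFIX"
echo "Installed to $PREFIX"
exit 0
__ARCHIVE__
"""

def build_installer(payload_dir="dist", out="install.sh"):
    archive = Path("payload.tar.gz")
    with tarfile.open(archive, "w:gz") as tar:
        tar.add(payload_dir, arcname=".")   # pack the payload directory
    with open(out, "wb") as f:
        f.write(STUB.encode())              # the hand-written stub
        f.write(archive.read_bytes())       # the appended archive (the "cat" part)
    Path(out).chmod(0o755)

if __name__ == "__main__":
    build_installer()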
Nice, but it is not entirely without JS. There is a tracking script from scorecardresearch.com.
debian/rules:
dh_auto_configure -- -DWITH_TESTS=$(WITH_TESTS) \
-DWITH_GUI_TESTS=$(WITH_TESTS) \
-DWITH_XC_UPDATECHECK=OFF \
-DWITH_XC_ALL=OFF
CMakeLists.txt:
set(WITH_XC_ALL OFF CACHE BOOL "Build in all available plugins")
option(WITH_XC_AUTOTYPE "Include Auto-Type." ON)
option(WITH_XC_NETWORKING "Include networking code (e.g. for downloading website icons)." OFF)
option(WITH_XC_BROWSER "Include browser integration with keepassxc-browser." OFF)
option(WITH_XC_BROWSER_PASSKEYS "Passkeys support for browser integration." OFF)
option(WITH_XC_YUBIKEY "Include YubiKey support." OFF)
option(WITH_XC_SSHAGENT "Include SSH agent support." OFF)
option(WITH_XC_KEESHARE "Sharing integration with KeeShare" OFF)
option(WITH_XC_UPDATECHECK "Include automatic update checks; disable for controlled distributions" ON)
if(UNIX AND NOT APPLE)
    option(WITH_XC_FDOSECRETS "Implement freedesktop.org Secret Storage Spec server side API." OFF)
endif()
option(WITH_XC_DOCS "Enable building of documentation" ON)
set(WITH_XC_X11 ON CACHE BOOL "Enable building with X11 deps")
# stuff inbetween cut out
if(WITH_XC_ALL)
    # Enable all options (except update check and docs)
    set(WITH_XC_AUTOTYPE ON)
    set(WITH_XC_NETWORKING ON)
    set(WITH_XC_BROWSER ON)
    set(WITH_XC_BROWSER_PASSKEYS ON)
    set(WITH_XC_YUBIKEY ON)
    set(WITH_XC_SSHAGENT ON)
    set(WITH_XC_KEESHARE ON)
    if(UNIX AND NOT APPLE)
        set(WITH_XC_FDOSECRETS ON)
    endif()
endif()
I’m no CMake expert, but it looks to me, from the first line of the CMakeLists.txt snippet above, that the default in the upstream build script is WITH_XC_ALL=OFF.
It’s easy to overlook with the omnipresent internet, but self-hosting doesn’t require the internet. You could host for your fellow students on the local network. If that’s also against the Wifi rules, you can either ignore that stupid rule or set up your own god damn wifi with hostapd on your machine and let students connect directly to it. It’s probably best to use a machine dedicated to the task for security reasons, as you wouldn’t want curious students to accidentally erase your homework. I wouldn’t use containers or VMs for any of this; I’d just use bare metal like in the good ol’ days. You could also, without having to worry, give people shell accounts, because it’s a closed network. The options are endless without all the worries of hosting on the internet.
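For the hostapd route, a minimal access-point config could look roughly like this; the interface name, SSID and passphrase are placeholders, and you’d still want something like dnsmasq for DHCP:

# /etc/hostapd/hostapd.conf (interface, SSID and passphrase are placeholders)
interface=wlan0
driver=nl80211
ssid=classnet
hw_mode=g
channel=6
auth_algs=1
wpa=2
wpa_key_mgmt=WPA-PSK
rsn_pairwise=CCMP
wpa_passphrase=changeme123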