News @EastonMan reads
+ random musings
+ worshipping the big shots
+ occasional cats
+ songs Easton listens to
Facebook has released OpenZL, a compression algorithm that learns the structure of a file format to optimize compression ratio, compression speed, and decompression speed all at once.

To use it, a programmer writes a description of the file's structure and generates/trains a compressor specific to that format, so the file's own internal structure can be used to produce data streams that compress more easily. All of these compressed streams share a single decompressor, which needs no modification when the compressor changes. When the input has no particular structure, the algorithm falls back to zstd.
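To get a feel for why format-aware restructuring helps, here is a minimal sketch in plain Python (my own illustration of the idea, not OpenZL's actual API, with zlib standing in for zstd): splitting typed records into per-field streams and delta-coding the monotonic columns leaves the generic codec far more redundancy to exploit.

import random
import struct
import zlib

random.seed(0)

# Hypothetical log records: (id, timestamp, value); ids and timestamps monotonic.
records = [(i, 1_700_000_000 + 60 * i, random.randrange(16)) for i in range(10_000)]

# Naive layout: fields interleaved, struct by struct.
naive = b"".join(struct.pack("<IIH", i, t, v) for (i, t, v) in records)

# Format-aware layout: one homogeneous stream per field, with the
# monotonically increasing columns stored as deltas (mostly tiny values).
def deltas(xs):
    return [b - a for a, b in zip([0] + xs, xs)]

ids, ts, vals = (list(col) for col in zip(*records))
restructured = (
    struct.pack(f"<{len(ids)}I", *deltas(ids))
    + struct.pack(f"<{len(ts)}I", *deltas(ts))
    + struct.pack(f"<{len(vals)}H", *vals)
)

print("interleaved :", len(zlib.compress(naive)))
print("restructured:", len(zlib.compress(restructured)))

The same generic entropy coder does all the work in both cases; the restructuring step is what makes the second stream dramatically smaller.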
https://engineering.fb.com/2025/10/06/developer-tools/openzl-open-source-format-aware-compression-framework/ Introducing OpenZL: An Open Source Format-Aware Compression Framework
Daniel Lemire's blog
std::ranges may not deliver the performance that you expect

source
Air routes connect the brain; fuel savings replace thinking
Chips and Cheese
AMD's Inaugural Tech Day ft. ROCm 7, Modular, and AMD Lab Tour
#ChipAndCheese

Hello you fine Internet folks,

I was invited to AMD's Austin headquarters for their inaugural Tech Day, where AMD announced ROCm 7 and Modular showed off their results with the MI355X, all topped off by a tour of AMD's labs.

Hope y'all enjoy!

[Embedded YouTube video: www.youtube-nocookie.com]

If you like the content then consider heading over to the Patreon or PayPal if you want to toss a few bucks to Chips and Cheese. Also consider joining the Discord.

source
(author: George Cozma)
An introductory article series:

When discussing the performance of large language models (LLMs), one long-circulating claim goes: “the attention operation during decoding is memory bound.” This view is so entrenched that many optimization discussions take it as a premise. Yet as model architectures evolve and decoding strategies innovate, the myth is being broken.
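To see where the claim comes from, here is a back-of-envelope sketch in Python (shapes are my own illustrative assumptions, not numbers from the linked post): during single-stream decode, classic multi-head attention reads the whole KV cache once per token while performing only about one FLOP per byte read.

# Arithmetic intensity of decode-time attention for one token,
# assuming illustrative LLaMA-7B-like shapes and an fp16 KV cache.
n_layers, n_heads, d_head = 32, 32, 128
seq_len = 4096          # tokens already in the KV cache
bytes_per_elem = 2      # fp16

# Bytes moved: every K and V vector is read once per decoded token.
kv_bytes = n_layers * n_heads * seq_len * d_head * 2 * bytes_per_elem

# FLOPs: per head, QK^T and scores@V are each seq_len * d_head multiply-adds.
flops = n_layers * n_heads * 4 * seq_len * d_head

print(f"KV cache read: {kv_bytes / 2**30:.1f} GiB/token")
print(f"attention:     {flops / 1e9:.1f} GFLOP/token")
print(f"intensity:     {flops / kv_bytes:.1f} FLOP/byte")

At roughly 1 FLOP per byte, an accelerator that needs hundreds of FLOPs per byte of memory traffic to saturate its math units sits firmly in memory-bound territory. Schemes that let several query heads share one KV head (MQA/GQA), or that decode several tokens against the same cache read, raise the FLOPs per byte, which is the direction from which the myth gets challenged.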

https://shinezyy.github.io/ArchShineZ/post/decoding-myth/ The myth of decoding large language models | ShineZ's Homepage
dramforever's blog
dram.page is now served by grebedoc.dev

That’s it. If you’re seeing this post, you’re receiving this page from grebedoc.dev.

Check it out if you want a way to serve the contents from a git repo over HTTP(S) on the public Internet.

The backend server on grebedoc.dev is called git-pages. I’ll probably use the two names interchangeably for the rest of this article.

(However, I will use “grebedoc.dev” to specifically refer to the domain itself, and use grebedoc.dev to refer to the service.)

Zero downtime migration

I think I’ve managed to do a zero-downtime migration from Netlify. This is what I did:

First, I moved everything to the pages branch. This is just easier.

Then, I set up the “method 1” DNS record:
_git-pages-repository.dram.page.  600  IN  TXT  (
    "https://github.com/dramforever/dram.page.git" )

Then, I triggered an initial clone using the PUT method.
$ curl -v -X PUT -H 'Host: dram.page' \
    'https://grebedoc.dev' \
    --data 'https://github.com/dramforever/dram.page.git'

This has two effects:

It asks git-pages to start serving files for Host: dram.page
It registers dram.page as a known domain to git-pages, allowing it to immediately start receiving HTTPS requests. This lets me completely avoid sending anything over unencrypted HTTP, including the webhook.

Now I can see if it’s serving my pages correctly:
$ curl -v -H 'Host: dram.page' 'https://grebedoc.dev'

For me, that looked good, so I’m ready to do the switch, changing the actual DNS record to make my site point to the grebedoc.dev server:
dram.page.  600  IN  ALIAS  grebedoc.dev

… well, ALIAS is not a real DNS record. The effect is that the authoritative DNS server looks up the IPv4 and IPv6 address of the domain grebedoc.dev and serves them as A and AAAA records of dram.page. Fortunately my authoritative DNS service has this feature.
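A quick way to convince yourself the flattening worked, sketched in Python (a hypothetical check I'm adding here, assuming both names resolve publicly): the synthesized A/AAAA records for dram.page should match whatever grebedoc.dev currently resolves to.

import socket

def addrs(host):
    # Collect the unique addresses behind a hostname (A and AAAA).
    return sorted({ai[4][0] for ai in socket.getaddrinfo(host, 443)})

print(addrs("dram.page"))
print(addrs("grebedoc.dev"))   # should print the same list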

And that’s it, I’m serving the latest site on grebedoc.dev.

As for auto-updating, the webhook part works normally.

What’s wrong with Netlify?

Nothing, really, but since I use zero complex features and just serve what’s in the git repo, this is just simpler.

source
The Great Firewall of China (GFW) suffered its largest-ever internal document leak today. More than 500 GB of source code, work logs, and internal communications were exposed, revealing details of the GFW's development and operation.

The leak came from Geedge Networks (chief scientist: Fang Binxing), one of the GFW's core R&D forces, and the MESA Lab of the processing-architecture group in the 2nd research division of the Institute of Information Engineering, Chinese Academy of Sciences. The company not only serves governments in Xinjiang, Jiangsu, Fujian, and elsewhere, but also exports censorship and surveillance technology to Myanmar, Pakistan, Ethiopia, Kazakhstan, and other countries under the Belt and Road framework.

The leak is significant and far-reaching. Given the sheer volume of material, GFW Report will keep analyzing it and posting updates on this page:

https://gfw.report/blog/geedge_and_mesa_leak/zh/