Search results for "BIT"
Today
06:01

BIT: Ethereum call-option selling pressure rising, volatility continues to trend downward

On April 6, BIT released a report noting that market trading volumes have shrunk: Bitcoin ETFs saw $1.3 billion in net inflows, while Ethereum continued to see outflows. Geopolitical risks have not dissipated, and put-option strategies are increasingly favored. The Ethereum options market has shifted notably, with traders more often selling call options to collect premium.
08:23

The BIT brand makes its first appearance since its upgrade at the "Trust in Digital Finance" industry event held in Singapore

BIT hosted the "Trust in Digital Finance" event in Singapore to discuss governance, compliance, and operational issues in the digital asset industry. The co-founder emphasized that Bitcoin ETF approval will accelerate the market's institutionalization and increase demand for infrastructure. The event centered on the "Trust White Paper," covering frameworks for risk governance and their implementation. BIT, a digital asset financial services group with more than $6 billion in assets under management, is committed to bridging traditional and digital finance.
02:32

Google Releases TurboQuant Algorithm: 3-bit KV Cache Quantization With No Accuracy Loss, Inference Speed Boosted Up to 8x

Google Research has released the TurboQuant algorithm, which compresses the KV cache of large language models to 3 bits, cutting memory usage by at least 6x with no loss of accuracy and no retraining required. The algorithm improves on conventional quantization through two sub-algorithms, PolarQuant and QJL, and testing shows strong performance across multiple long-context benchmarks.
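The general idea of storing KV-cache values as 3-bit codes can be sketched with plain min-max affine quantization. This is a generic illustration under assumed names and scheme, not TurboQuant's actual PolarQuant/QJL pipeline:

```python
def quantize_3bit(xs):
    """Map floats to 3-bit codes (0..7) using a shared min-max range.

    Generic affine quantization sketch, not Google's TurboQuant.
    """
    lo, hi = min(xs), max(xs)
    step = (hi - lo) / 7 or 1.0  # guard the degenerate all-equal case
    # Each value now needs 3 bits instead of 16: a ~5.3x raw reduction,
    # before any per-block metadata overhead.
    codes = [min(7, max(0, round((x - lo) / step))) for x in xs]
    return codes, lo, step

def dequantize_3bit(codes, lo, step):
    """Reconstruct approximate floats from 3-bit codes."""
    return [lo + c * step for c in codes]
```

Replacing 16-bit floats with 3-bit codes gives a raw 16/3 ≈ 5.3x reduction; the "at least 6 times" figure in the report presumably reflects TurboQuant's additional transforms beyond this naive scheme.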
13:08

Tether introduces the BitNet LoRA framework, supporting large model training on mobile devices

Gate News report: On March 17, Tether's QVAC Fabric launched the world's first cross-platform LoRA fine-tuning framework for Microsoft BitNet (1-bit LLM), significantly lowering the VRAM and compute requirements for large-model training. The framework supports LoRA fine-tuning and accelerated inference on Intel, AMD, and Apple Silicon M-series hardware, as well as mobile GPUs (including Adreno, Mali, and Apple Bionic).
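LoRA keeps the base weights frozen and trains only a small low-rank update, which is what makes fine-tuning feasible on VRAM-constrained mobile GPUs. A minimal sketch of the forward pass (generic LoRA math with illustrative names and shapes, not QVAC Fabric's API):

```python
def matmul(a, b):
    """Naive dense matrix multiply for lists of lists."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)]
            for row in a]

def lora_forward(x, W, A, B, alpha=8, r=2):
    """Compute y = (W + (alpha/r) * B @ A) @ x, the standard LoRA form.

    W (d_out x d_in) is frozen; only A (r x d_in) and B (d_out x r) train.
    B starts at zero, so the update contributes nothing initially.
    """
    scale = alpha / r
    delta = matmul(B, A)  # low-rank weight update
    W_eff = [[w + scale * d for w, d in zip(wr, dr)]
             for wr, dr in zip(W, delta)]
    return matmul(W_eff, x)
```

Because only A and B (r * (d_in + d_out) values per layer, with r small) carry gradients, optimizer state and activation memory shrink dramatically compared with full fine-tuning.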