
Advertising on the Telegram channel «Data Science | Machine Learning for Researchers»
Computer science
Language: English
Recent Channel Posts
Can your money really work for you—even in a volatile market?
Discover how disciplined investors are building wealth safely with the Wheel Strategy and global ETFs. Get real trade results and weekly lessons—no hype, just proven strategies for growing long-term income.
Start controlling your financial future right here before your next investment decision!
#Ad InsideAds
491 views · 16:44 · 24.07.2025
The data NO ONE else has: Infinity is already overtaking Mugen Train—and it’s only Day 5!
I coded this graph myself. Even Demon Slayer fans haven’t seen this.
You’ll never guess what happens next—see the proof here before it’s deleted!
#Ad InsideAds
463 views · 20:55 · 24.07.2025
Tired of endless job boards and low offers?
Unlock access to exclusive remote jobs from top startups—some with salaries $100k+ and early-bird roles at $50/h and above.
New high-paying openings posted daily—tech, marketing, design, and more.
Ready to upgrade your career from anywhere?
Check today’s top jobs now before they’re gone!
#Ad InsideAds
347 views · 06:10 · 25.07.2025
🔹 Title:
Vision Foundation Models as Effective Visual Tokenizers for Autoregressive Image Generation
🔹 Publication Date: Published on Jul 11
🔹 Abstract:
A novel image tokenizer built on pre-trained vision foundation models improves image reconstruction, generation quality, and token efficiency, enhancing autoregressive generation and class-conditional synthesis. (AI-generated summary)

Leveraging the powerful representations of pre-trained vision foundation models -- traditionally used for visual comprehension -- we explore a novel direction: building an image tokenizer directly atop such models, a largely underexplored area. Specifically, we employ a frozen vision foundation model as the encoder of our tokenizer. To enhance its effectiveness, we introduce two key components: (1) a region-adaptive quantization framework that reduces redundancy in the pre-trained features on regular 2D grids, and (2) a semantic reconstruction objective that aligns the tokenizer's outputs with the foundation model's representations to preserve semantic fidelity. Based on these designs, our proposed image tokenizer, VFMTok, achieves substantial improvements in image reconstruction and generation quality, while also enhancing token efficiency. It further boosts autoregressive (AR) generation -- achieving a gFID of 2.07 on ImageNet benchmarks, while accelerating model convergence by three times, and enabling high-fidelity class-conditional synthesis without the need for classifier-free guidance (CFG). The code will be released publicly to benefit the community.
🔹 Links:
• arXiv Page: https://arxiv.org/abs/2507.08441
• PDF: https://arxiv.org/pdf/2507.08441
🔹 Datasets citing this paper:
No datasets found
🔹 Spaces citing this paper:
No spaces found
==================================
For more data science resources:
✓ https://t.me/DataScienceT
348 views · 08:16 · 25.07.2025
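
Illustrative sketch (not the paper's code): the VFMTok post above describes a frozen vision foundation model as the tokenizer encoder, a region-adaptive quantizer, and a semantic reconstruction objective. The minimal PyTorch sketch below mirrors that recipe under simplifying assumptions: a timm ViT stands in for the foundation model, plain nearest-neighbour vector quantization replaces the region-adaptive scheme, and MSE against the frozen features serves as the semantic reconstruction loss. All class, parameter, and loss-weight choices are hypothetical.

import torch
import torch.nn as nn
import torch.nn.functional as F
import timm


class FoundationModelTokenizer(nn.Module):
    """Toy VFMTok-style tokenizer: frozen foundation-model encoder + VQ codebook."""

    def __init__(self, codebook_size=8192, code_dim=256, feat_dim=768):
        super().__init__()
        # Frozen pre-trained vision foundation model used as the encoder.
        self.encoder = timm.create_model("vit_base_patch16_224", pretrained=True, num_classes=0)
        for p in self.encoder.parameters():
            p.requires_grad = False
        self.to_code = nn.Linear(feat_dim, code_dim)            # project features into code space
        self.codebook = nn.Embedding(codebook_size, code_dim)   # discrete token vocabulary
        self.from_code = nn.Linear(code_dim, feat_dim)          # map tokens back to feature space

    def quantize(self, z):
        # Nearest-neighbour lookup (stand-in for the region-adaptive quantization).
        dists = torch.cdist(z, self.codebook.weight)            # (N, codebook_size)
        idx = dists.argmin(dim=-1)
        z_q = self.codebook(idx)
        z_q = z + (z_q - z).detach()                            # straight-through estimator
        return z_q, idx

    def forward(self, images):
        with torch.no_grad():
            feats = self.encoder.forward_features(images)       # (B, L, feat_dim) patch features
        z = self.to_code(feats).flatten(0, 1)
        z_q, idx = self.quantize(z)
        recon = self.from_code(z_q).view_as(feats)
        # Semantic reconstruction objective: align decoded tokens with the frozen features.
        sem_loss = F.mse_loss(recon, feats)
        # Commitment term keeps projections close to their assigned codes.
        commit_loss = F.mse_loss(z, z_q.detach())
        return idx.view(feats.shape[0], -1), sem_loss + 0.25 * commit_loss

Keeping the encoder frozen is the design point the abstract emphasizes: only the projection, codebook, and decoder are trained, so the discrete tokens inherit the foundation model's semantics.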
This channel is for Programmers, Coders, and Software Engineers.
0️⃣ Python
1️⃣ Data Science
2️⃣ Machine Learning
3️⃣ Data Visualization
4️⃣ Artificial Intelligence
5️⃣ Data Analysis
6️⃣ Statistics
7️⃣ Deep Learning
8️⃣ Programming Languages
✅ https://t.me/addlist/8_rRW2scgfRhOTc0
✅ https://t.me/Codeprogrammer
136 views · 08:35 · 25.07.2025
Stop wasting time scrolling. Start making money. 💰
With @TaniaTradingAcademy you just copy, paste… and cash out.
No stress. No complicated strategies. Just pure profits.
💥 Anyone can do it. The earlier you join, the faster you win.
🟣 Join the winning side 👉 @TaniaTradingAcademy
#Ad InsideAds
286 views · 08:54 · 25.07.2025
🔹 Title:
A Survey of Context Engineering for Large Language Models
🔹 Publication Date: Published on Jul 17
🔹 Abstract:
Context Engineering systematically optimizes information payloads for Large Language Models, addressing gaps in generating sophisticated, long-form outputs. (AI-generated summary)

The performance of Large Language Models (LLMs) is fundamentally determined by the contextual information provided during inference. This survey introduces Context Engineering, a formal discipline that transcends simple prompt design to encompass the systematic optimization of information payloads for LLMs. We present a comprehensive taxonomy decomposing Context Engineering into its foundational components and the sophisticated implementations that integrate them into intelligent systems. We first examine the foundational components: context retrieval and generation, context processing, and context management. We then explore how these components are architecturally integrated to create sophisticated system implementations: retrieval-augmented generation (RAG), memory systems and tool-integrated reasoning, and multi-agent systems. Through this systematic analysis of over 1300 research papers, our survey not only establishes a technical roadmap for the field but also reveals a critical research gap: a fundamental asymmetry exists between model capabilities. While current models, augmented by advanced context engineering, demonstrate remarkable proficiency in understanding complex contexts, they exhibit pronounced limitations in generating equally sophisticated, long-form outputs. Addressing this gap is a defining priority for future research. Ultimately, this survey provides a unified framework for both researchers and engineers advancing context-aware AI.
🔹 Links:
• arXiv Page: https://arxiv.org/abs/2507.13334
• Collection: https://huggingface.co/collections/Maxwell-Jia/daily-arxiv-668d5e8d30bab29956b66b8d
• PDF: https://arxiv.org/pdf/2507.13334
• Github: https://github.com/Meirtz/Awesome-Context-Engineering
🔹 Datasets citing this paper:
No datasets found
🔹 Spaces citing this paper:
No spaces found
==================================
For more data science resources:
✓ https://t.me/DataScienceT
326 views · 10:33 · 25.07.2025
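
Illustrative sketch (not from the survey): the post above frames context engineering as the systematic optimization of the information payload an LLM sees, spanning context retrieval, processing, and management. The toy Python below shows that pipeline in miniature as a retrieve-rank-pack loop under a word budget; the lexical-overlap scoring, the budget, and all names are assumptions made purely for demonstration, where a real system would use embedding retrieval and a model tokenizer.

from dataclasses import dataclass


@dataclass
class ContextItem:
    text: str
    score: float = 0.0


def retrieve(query: str, corpus: list[str]) -> list[ContextItem]:
    # Context retrieval (toy): score documents by word overlap with the query.
    q_words = set(query.lower().split())
    items = [ContextItem(doc, len(q_words & set(doc.lower().split()))) for doc in corpus]
    return sorted(items, key=lambda it: it.score, reverse=True)


def assemble_context(query: str, corpus: list[str], budget_words: int = 120) -> str:
    # Context management (toy): greedily pack the best items under a word budget.
    parts, used = [], 0
    for item in retrieve(query, corpus):
        n = len(item.text.split())
        if item.score == 0 or used + n > budget_words:
            continue
        parts.append(item.text)
        used += n
    # Final information payload handed to the model together with the user query.
    return "Context:\n" + "\n---\n".join(parts) + f"\n\nQuestion: {query}"


if __name__ == "__main__":
    docs = [
        "GRPO is a policy-optimization method used to fine-tune language models.",
        "The wheel strategy is an options-selling approach for generating income.",
        "RLVR rewards models only when their answers can be automatically verified.",
    ]
    print(assemble_context("How does RLVR relate to policy optimization?", docs))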
🔹 Title:
Perception-Aware Policy Optimization for Multimodal Reasoning
🔹 Publication Date: Published on Jul 8
🔹 Abstract:
Perception-Aware Policy Optimization (PAPO) enhances reinforcement learning with verifiable rewards for multimodal reasoning by integrating an implicit perception loss, improving visual perception and reasoning. (AI-generated summary)

Reinforcement Learning with Verifiable Rewards (RLVR) has proven to be a highly effective strategy for endowing Large Language Models (LLMs) with robust multi-step reasoning abilities. However, its design and optimizations remain tailored to purely textual domains, resulting in suboptimal performance when applied to multimodal reasoning tasks. In particular, we observe that a major source of error in current multimodal reasoning lies in the perception of visual inputs. To address this bottleneck, we propose Perception-Aware Policy Optimization (PAPO), a simple yet effective extension of GRPO that encourages the model to learn to perceive while learning to reason, entirely from internal supervision signals. Notably, PAPO does not rely on additional data curation, external reward models, or proprietary models. Specifically, we introduce the Implicit Perception Loss in the form of a KL divergence term to the GRPO objective, which, despite its simplicity, yields significant overall improvements (4.4%) on diverse multimodal benchmarks. The improvements are more pronounced, approaching 8.0%, on tasks with high vision dependency. We also observe a substantial reduction (30.5%) in perception errors, indicating improved perceptual capabilities with PAPO. We conduct a comprehensive analysis of PAPO and identify a unique loss hacking issue, which we rigorously analyze and mitigate through a Double Entropy Loss. Overall, our work introduces a deeper integration of perception-aware supervision into RLVR learning objectives and lays the groundwork for a new RL framework that encourages visually grounded reasoning. Project page: https://mikewangwzhl.github.io/PAPO.
🔹 Links:
• arXiv Page: https://arxiv.org/abs/2507.06448
• PDF: https://arxiv.org/pdf/2507.06448
• Project Page: https://mikewangwzhl.github.io/PAPO
• Github: https://mikewangwzhl.github.io/PAPO/
🔹 Datasets citing this paper:
• https://huggingface.co/datasets/PAPOGalaxy/PAPO_ViRL39K_train
• https://huggingface.co/datasets/PAPOGalaxy/PAPO_MMK12_test
🔹 Spaces citing this paper:
No spaces found
==================================
For more data science resources:
✓ https://t.me/DataScienceT
490 views · 10:37 · 25.07.2025
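
Illustrative sketch (not the authors' implementation): the PAPO abstract above says the method adds an Implicit Perception Loss, a KL-divergence term, to the GRPO objective so that the policy must actually rely on visual input. The snippet below sketches one plausible reading: a clipped GRPO surrogate plus a Monte Carlo KL estimate between the policy's token log-probabilities with the original image and with a masked image. Tensor shapes, the clip range, and the KL weight are assumptions.

import torch


def grpo_advantages(rewards):
    # Group-relative advantages: standardize verifiable rewards within a rollout group.
    return (rewards - rewards.mean()) / (rewards.std() + 1e-6)


def papo_loss(logp_full, logp_masked, old_logp, rewards, kl_weight=0.01, clip_eps=0.2):
    # logp_full:   (G, T) log-probs of sampled tokens given the original image
    # logp_masked: (G, T) log-probs of the same tokens given a masked image
    # old_logp:    (G, T) log-probs under the behavior policy (PPO-style ratio)
    # rewards:     (G,)   verifiable reward per rollout in the group
    adv = grpo_advantages(rewards).unsqueeze(-1)                   # (G, 1)
    ratio = torch.exp(logp_full - old_logp)                        # per-token importance ratio
    clipped = torch.clamp(ratio, 1 - clip_eps, 1 + clip_eps)
    policy_loss = -torch.min(ratio * adv, clipped * adv).mean()    # clipped GRPO surrogate

    # Implicit Perception Loss (sketch): Monte Carlo estimate of
    # KL(pi(.|original image) || pi(.|masked image)) over the sampled tokens.
    # Subtracting it from the loss pushes the model to depend on visual evidence.
    perception_kl = (logp_full - logp_masked).mean()
    return policy_loss - kl_weight * perception_kl


# Toy call with random tensors, just to show the expected shapes.
G, T = 4, 16
logp_full = torch.randn(G, T)
logp_masked = torch.randn(G, T)
old_logp = logp_full.detach() + 0.01 * torch.randn(G, T)
rewards = torch.randint(0, 2, (G,)).float()
print(papo_loss(logp_full, logp_masked, old_logp, rewards))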
I recommend you join @TradingNewsIO for Global & Economic News 24/7
⚡️Stay up-to-date with real-time updates on global events.
➡️ Click Here and JOIN NOW!
#Ad InsideAds
391 views · 15:45 · 25.07.2025
No skills? No problem. Just copy-paste and GET PAID.
➡️ 22,000+ already started… YOU'RE NEXT! Click here @NPFXSignals
#Ad InsideAds
1 view · 14:07 · 26.07.2025