
Advertising on the Telegram channel «Data Science | Machine Learning for Researchers»
Rating: 4.6
Category: Computer science
Language: English
Buy advertising in this channel
Placement Format:
- 1/24 (1 hour at the top / 24 hours in the feed)
- 2/48
- 3/72
- Native
- 7 days
- Forwards
Quantity
- 1
- 2
- 3
- 4
- 5
- 8
- 10
- 15
Advertising publication cost: $6.00
0.0%
Remaining at this price: 0
Recent Channel Posts
🚀 LunaProxy - The Most Cost-effective Residential Proxy
Exclusive Benefits for Members of This Group:
💥 Residential Proxy: as low as $0.77/GB. Use the discount code [lunapro30] when placing an order and save 30% immediately.
✔️ Over 200 million pure IPs | No charge for invalid ones | Success rate > 99.9%
💥 Unlimited Traffic Proxy: up to 72% off, only $79/day.
✔️ Unlimited traffic | Unlimited concurrency | Over 100 Gbps of bandwidth | Customized services | Save 90% of the cost when collecting AI/LLM data
Join the Luna Affiliate Program and earn a 10% commission, with no upper limit and withdrawals at any time.
👉 Take action now: https://www.lunaproxy.com/?ls=data&lk=?01
166
06:40
28.04.2025
Remote job opportunity 🧑💻
No qualifications or experience required; the company provides full training ✨
Flexible working hours ⏰
Register, and you will then be contacted to attend an introductory meeting about the job and the company
https://forms.gle/hqUZXu7u4uLjEDPv8
547
20:10
27.04.2025
🌼 SOTA Textured 3D-Guided VTON 🌼
👉 #ALIBABA unveils 3DV-TON, a novel diffusion model for high-quality, temporally consistent video try-on. It generates animatable textured 3D meshes as explicit frame-level guidance, alleviating the issue of models over-focusing on appearance fidelity at the expense of motion coherence. Code & benchmark to be released 💙
👉 Review: https://t.ly/0tjdC
👉 Paper: https://lnkd.in/dFseYSXz
👉 Project: https://lnkd.in/djtqzrzs
👉 Repo: TBA
#AI #3DReconstruction #DiffusionModels #VirtualTryOn #ComputerVision #DeepLearning #VideoSynthesis
https://t.me/DataScienceT 🔗
829
08:57
27.04.2025
This channel is for Programmers, Coders, and Software Engineers.
0️⃣ Python
1️⃣ Data Science
2️⃣ Machine Learning
3️⃣ Data Visualization
4️⃣ Artificial Intelligence
5️⃣ Data Analysis
6️⃣ Statistics
7️⃣ Deep Learning
8️⃣ Programming Languages
✅ https://t.me/addlist/8_rRW2scgfRhOTc0
✅ https://t.me/Codeprogrammer
634
12:50
23.04.2025
NVIDIA introduces Describe Anything Model (DAM)
a new state-of-the-art model designed to generate rich, detailed descriptions for specific regions in images and videos. Users can mark these regions using points, boxes, scribbles, or masks.
DAM sets a new benchmark in multimodal understanding, with open-source code under the Apache license, a dedicated dataset, and a live demo available on Hugging Face.
Explore more below:
Paper: https://lnkd.in/dZh82xtV
Project Page: https://lnkd.in/dcv9V2ZF
GitHub Repo: https://lnkd.in/dJB9Ehtb
Hugging Face Demo: https://lnkd.in/dXDb2MWU
Review: https://t.ly/la4JD
#NVIDIA #DescribeAnything #ComputerVision #MultimodalAI #DeepLearning #ArtificialIntelligence #MachineLearning #OpenSource #HuggingFace #GenerativeAI #VisualUnderstanding #Python #AIresearch
https://t.me/DataScienceT ✅
1339
10:08
23.04.2025
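The DAM post above notes that a region can be marked with points, boxes, scribbles, or masks. As an illustrative sketch (not DAM's actual API; helper names are assumptions), such prompts can all be normalized into a binary mask, the common denominator a region-captioning model can consume:

```python
import numpy as np

def box_to_mask(h, w, box):
    """box = (x0, y0, x1, y1) in pixels -> binary mask of shape (h, w)."""
    mask = np.zeros((h, w), dtype=bool)
    x0, y0, x1, y1 = box
    mask[y0:y1, x0:x1] = True
    return mask

def points_to_mask(h, w, points, radius=3):
    """Dilate clicked points into small disks so they cover a region."""
    yy, xx = np.mgrid[0:h, 0:w]
    mask = np.zeros((h, w), dtype=bool)
    for (x, y) in points:
        mask |= (yy - y) ** 2 + (xx - x) ** 2 <= radius ** 2
    return mask

m = box_to_mask(64, 64, (10, 20, 30, 40))
print(m.sum())  # 20 x 20 = 400 pixels inside the box
```

Scribbles would be handled the same way: rasterize the stroke, then dilate it into a mask.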
Follow me on LinkedIn (important for you)
https://www.linkedin.com/in/hussein-sheikho-4a8187246
175
12:27
21.04.2025
Liquid: Language Models are Scalable Multi-modal Generators
5 Dec 2024 · Junfeng Wu, Yi Jiang, Chuofan Ma, Yuliang Liu, Hengshuang Zhao, Zehuan Yuan, Song Bai, Xiang Bai
We present Liquid, an auto-regressive generation paradigm that seamlessly integrates visual comprehension and generation by tokenizing images into discrete codes and learning these code embeddings alongside text tokens within a shared feature space for both vision and language. Unlike previous multimodal large language models (MLLMs), Liquid achieves this integration using a single large language model (LLM), eliminating the need for external pretrained visual embeddings such as CLIP. For the first time, Liquid uncovers a scaling law: the performance drop unavoidably brought by the unified training of visual and language tasks diminishes as the model size increases. Furthermore, the unified token space enables visual generation and comprehension tasks to mutually enhance each other, effectively removing the typical interference seen in earlier models. We show that existing LLMs can serve as strong foundations for Liquid, saving 100x in training costs while outperforming Chameleon in multimodal capabilities and maintaining language performance comparable to mainstream LLMs like LLAMA2. Liquid also outperforms models like SD v2.1 and SD-XL (FID of 5.47 on MJHQ-30K), excelling in both vision-language and text-only tasks. This work demonstrates that LLMs such as LLAMA3.2 and GEMMA2 are powerful multimodal generators, offering a scalable solution for enhancing both vision-language understanding and generation. The code and models will be released at https://github.com/FoundationVision/Liquid.
Paper: https://arxiv.org/pdf/2412.04332v2.pdf
Code: https://github.com/foundationvision/liquid
https://t.me/DataScienceT
1490
09:49
21.04.2025
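The Liquid abstract above rests on one idea: images are tokenized into discrete codes that live in the same vocabulary as text tokens, so a single LM head models both modalities. A toy sketch of that idea, with illustrative vocabulary sizes and helper names (not the paper's actual configuration):

```python
import numpy as np

TEXT_VOCAB = 32000      # hypothetical text vocabulary size
CODEBOOK_SIZE = 8192    # hypothetical VQ codebook size

def quantize_image(patches, codebook):
    """Map each patch vector to the index of its nearest codebook entry."""
    d = ((patches[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    return d.argmin(axis=1)

def build_sequence(text_ids, image_codes):
    """Shift image codes past the text vocabulary so one autoregressive
    LM head covers both modalities in a single token space."""
    return list(text_ids) + [TEXT_VOCAB + int(c) for c in image_codes]

rng = np.random.default_rng(0)
codebook = rng.normal(size=(CODEBOOK_SIZE, 16))
patches = rng.normal(size=(4, 16))
seq = build_sequence([17, 923, 5], quantize_image(patches, codebook))
print(len(seq), seq[:3])  # 7 [17, 923, 5]
```

Generation then needs no external visual embedding: sampling a token above `TEXT_VOCAB` emits an image code, anything below it emits text.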
REPA-E: Unlocking VAE for End-to-End Tuning with Latent Diffusion Transformers
14 Apr 2025 · Xingjian Leng, Jaskirat Singh, Yunzhong Hou, Zhenchang Xing, Saining Xie, Liang Zheng
In this paper we tackle a fundamental question: "Can we train latent diffusion models together with the variational auto-encoder (VAE) tokenizer in an end-to-end manner?" Traditional deep-learning wisdom dictates that end-to-end training is often preferable when possible. However, for latent diffusion transformers, it is observed that training both the VAE and the diffusion model end-to-end using the standard diffusion loss is ineffective, even causing a degradation in final performance. We show that while the diffusion loss is ineffective, end-to-end training can be unlocked through the representation-alignment (REPA) loss, allowing both the VAE and the diffusion model to be jointly tuned during the training process. Despite its simplicity, the proposed training recipe (REPA-E) shows remarkable performance, speeding up diffusion model training by over 17x and 45x over the REPA and vanilla training recipes, respectively. Interestingly, we observe that end-to-end tuning with REPA-E also improves the VAE itself, leading to improved latent space structure and downstream generation performance. In terms of final performance, our approach sets a new state of the art, achieving FID of 1.26 and 1.83 with and without classifier-free guidance on ImageNet 256 x 256. Code is available at https://end2end-diffusion.github.io.
Paper: https://arxiv.org/pdf/2504.10483v1.pdf
Code: https://github.com/End2End-Diffusion/REPA-E
Dataset: ImageNet
https://t.me/DataScienceT ✅
1436
09:39
20.04.2025
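The REPA-E abstract above hinges on a representation-alignment loss. A minimal numpy sketch of such a loss, under the assumption that it rewards cosine similarity between projected diffusion-model features and features from a frozen pretrained encoder (sizes and the projection are illustrative; the paper's exact recipe may differ):

```python
import numpy as np

def repa_alignment_loss(diff_feats, target_feats, proj):
    """1 - mean cosine similarity between projected diffusion features
    and frozen pretrained-encoder features, averaged over patch tokens."""
    z = diff_feats @ proj
    z = z / (np.linalg.norm(z, axis=-1, keepdims=True) + 1e-8)
    t = target_feats / (np.linalg.norm(target_feats, axis=-1, keepdims=True) + 1e-8)
    return 1.0 - float((z * t).sum(axis=-1).mean())

rng = np.random.default_rng(0)
proj = rng.normal(size=(384, 768))        # learnable projection (illustrative sizes)
diff_feats = rng.normal(size=(64, 384))   # diffusion-transformer patch features
target = diff_feats @ proj                # pretend encoder features: perfectly aligned
misaligned = rng.normal(size=(64, 384))

print(round(repa_alignment_loss(diff_feats, target, proj), 6))  # ~0.0 (aligned)
print(repa_alignment_loss(misaligned, target, proj) > 0.5)      # True (misaligned)
```

Because the loss depends only on feature directions, it can supervise both the VAE and the diffusion model without the instabilities the abstract attributes to end-to-end training under the plain diffusion loss.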
📢 5-Day Generative AI Intensive Course with #Google is now available as a self-paced Learn Guide!
Access whitepapers, podcasts, code labs, & recorded livestreams. Additionally, there is a bonus assignment for you!
https://www.kaggle.com/learn-guide/5-day-genai
#GenerativeAI #GoogleAI #AICourse #SelfPacedLearning #MachineLearning #DeepLearning #Kaggle #AICommunity #TechEducation #AIforEveryone
⚡️ BEST DATA SCIENCE CHANNELS ON TELEGRAM 🌟
1070
18:09
19.04.2025
🔥 General Attention-Based Object Detection 🔥
👉 GATE3D is a novel framework designed specifically for generalized monocular 3D object detection via weak supervision. GATE3D effectively bridges domain gaps by employing consistency losses between 2D and 3D predictions.
👉 Review: https://t.ly/O7wqH
👉 Paper: https://lnkd.in/dc5VTUj9
👉 Project: https://lnkd.in/dzrt-qQV
#3DObjectDetection #Monocular3D #DeepLearning #WeakSupervision #ComputerVision #AI #MachineLearning #GATE3D
⚡️ BEST DATA SCIENCE CHANNELS ON TELEGRAM 🌟
1536
07:45
18.04.2025
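The GATE3D post above mentions consistency losses between 2D and 3D predictions. A generic sketch of one such check, projecting a predicted 3D box center through hypothetical camera intrinsics and comparing it to the 2D prediction (an illustrative assumption, not the paper's exact formulation):

```python
import numpy as np

def project(point_3d, K):
    """Pinhole projection of a 3D camera-space point to pixel coordinates."""
    x = K @ point_3d
    return x[:2] / x[2]

def consistency_loss(center_3d, center_2d, K):
    """Squared pixel distance between the projected 3D box center
    and the predicted 2D box center."""
    return float(((project(center_3d, K) - center_2d) ** 2).sum())

K = np.array([[720.0, 0.0, 640.0],
              [0.0, 720.0, 360.0],
              [0.0, 0.0, 1.0]])          # hypothetical intrinsics
center_3d = np.array([1.0, 0.5, 10.0])   # predicted 3D center (camera frame)
center_2d = project(center_3d, K)        # a perfectly consistent 2D prediction

print(consistency_loss(center_3d, center_2d, K))        # 0.0
print(consistency_loss(center_3d, center_2d + 5.0, K))  # 50.0
```

Under weak supervision, a penalty of this shape lets 2D annotations constrain the 3D head even when no 3D labels exist for the target domain.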
Channel Reviews
4.6
1 review over the last 6 months
Very good (100%) in the last 6 months
**ikpelli@********.com
On the service since January 2025
07.01.2025 16:57
Rating: 4
Ad has been placed
Channel statistics
Rating: 18.4
Review rating: 4.6
Channel rating: 82
Subscribers: 29.1K
APV: (locked)
ER: 1.9%
Posts per day: 2.0
CPM: (locked)