
Advertising on the Telegram channel «Data Science | Machine Learning for Researchers»
4.6
33
Computer science
Language: English
1.4K
3
Buy advertising in this channel
Placement Format:
- 1/24
- 2/48
- 3/72
- Native
- 7 days
- Forwards
1 hour in the top / 24 hours in the feed
Quantity
- 1
- 2
- 3
- 4
- 5
- 8
- 10
- 15
Advertising publication cost
$6.00
0.0%
Remaining at this price: 0
Recent Channel Posts
The Hundred-Page Language Models Book
Read it:
https://github.com/aburkov/theLMbook
#LLM #NLP #ML #AI #PYTHON #PYTORCH https://t.me/DataScienceM
765
12:13
19.02.2025
Accelerating Data Processing and Benchmarking of AI Models for Pathology
10 Feb 2025 · Andrew Zhang, Guillaume Jaume, Anurag Vaidya, Tong Ding, Faisal Mahmood
Advances in foundation modeling have reshaped computational pathology. However, the increasing number of available models and lack of standardized benchmarks make it increasingly complex to assess their strengths, limitations, and potential for further development. To address these challenges, we introduce a new suite of software tools for whole-slide image processing, foundation model benchmarking, and curated publicly available tasks. We anticipate that these resources will promote transparency, reproducibility, and continued progress in the field.
Paper: https://arxiv.org/pdf/2502.06750v1.pdf
Codes:
https://github.com/mahmoodlab/trident
https://github.com/mahmoodlab/patho-bench
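The post does not show a usage example, so here is a rough, hypothetical sketch of the kind of pipeline such tooling automates: tiling a whole-slide image and embedding each tile with a pathology foundation model for downstream benchmarking. The `encoder` is a placeholder, and this is not the trident or patho-bench API.

```python
# Hypothetical sketch only: tile a whole-slide image and embed each tile.
# `encoder` stands in for any pathology foundation model; this is NOT the
# trident or patho-bench API.
import numpy as np
import openslide
import torch

def extract_tile_features(wsi_path: str, encoder: torch.nn.Module,
                          tile: int = 256) -> torch.Tensor:
    slide = openslide.OpenSlide(wsi_path)
    width, height = slide.dimensions          # level-0 size in pixels
    feats = []
    for y in range(0, height - tile + 1, tile):
        for x in range(0, width - tile + 1, tile):
            region = slide.read_region((x, y), 0, (tile, tile)).convert("RGB")
            arr = torch.from_numpy(np.asarray(region).copy())
            arr = arr.permute(2, 0, 1).float().unsqueeze(0) / 255.0  # (1, 3, H, W)
            with torch.no_grad():
                feats.append(encoder(arr))    # expected shape (1, feature_dim)
    return torch.cat(feats)                   # (num_tiles, feature_dim)
```

In practice, such toolkits also handle tissue segmentation, magnification selection, and batched GPU inference, which this sketch omits.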
#DataScience #ArtificialIntelligence #MachineLearning #PythonProgramming #DeepLearning #LLM #AIResearch #BigData #NeuralNetworks #DataAnalytics #NLP #AutoML #DataVisualization #ScikitLearn #Pandas #NumPy #TensorFlow #AIethics #PredictiveModeling #GPUComputing #OpenSourceAI #DeepSeek https://t.me/DataScienceT
793
08:01
19.02.2025
Enhance-A-Video: Better Generated Video for Free
11 Feb 2025 · Yang Luo, Xuanlei Zhao, Mengzhao Chen, Kaipeng Zhang, Wenqi Shao, Kai Wang, Zhangyang Wang, Yang You
DiT-based video generation has achieved remarkable results, but research into enhancing existing models remains relatively unexplored. In this work, we introduce a training-free approach to enhance the coherence and quality of DiT-based generated videos, named Enhance-A-Video. The core idea is enhancing the cross-frame correlations based on non-diagonal temporal attention distributions. Thanks to its simple design, our approach can be easily applied to most DiT-based video generation frameworks without any retraining or fine-tuning. Across various DiT-based video generation models, our approach demonstrates promising improvements in both temporal consistency and visual quality. We hope this research can inspire future explorations in video generation enhancement.
Paper: https://arxiv.org/pdf/2502.07508v1.pdf
Code: https://github.com/NUS-HPC-AI-Lab/Enhance-A-Video
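The core idea (scaling temporal-attention output by the strength of non-diagonal, cross-frame attention) can be illustrated with a minimal sketch. The names and the exact scaling rule below are assumptions for illustration, not the released implementation.

```python
# Illustrative sketch, not the official Enhance-A-Video code.
import torch

def enhance_temporal_attention(attn: torch.Tensor, hidden: torch.Tensor,
                               temperature: float = 1.0) -> torch.Tensor:
    """attn:   (frames, frames) temporal attention weights, each row sums to 1.
    hidden: (frames, dim) temporal-attention output to be rescaled."""
    frames = attn.shape[-1]
    off_diag = ~torch.eye(frames, dtype=torch.bool, device=attn.device)
    # Mean attention a frame pays to *other* frames: a proxy for cross-frame correlation.
    cross_frame_intensity = attn[off_diag].mean()
    # Boost features in proportion to cross-frame correlation (assumed scaling rule).
    return hidden * (1.0 + temperature * cross_frame_intensity)

# Toy usage: 8 frames, 64-dim features, uniform temporal attention.
attn = torch.full((8, 8), 1.0 / 8)
out = enhance_temporal_attention(attn, torch.randn(8, 64))
```

Because the operation only rescales an existing attention output, it can be dropped into a DiT pipeline without retraining, which matches the training-free claim in the abstract.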
#DataScience #ArtificialIntelligence #MachineLearning #PythonProgramming #DeepLearning #LLM #AIResearch #BigData #NeuralNetworks #DataAnalytics #NLP #AutoML #DataVisualization #ScikitLearn #Pandas #NumPy #TensorFlow #AIethics #PredictiveModeling #GPUComputing #OpenSourceAI #DeepSeek https://t.me/DataScienceT
765
07:17
19.02.2025
✔️ Awesome AI/ML Resources: a roadmap and free resources for beginners learning AI/ML.
🖥 https://github.com/armankhondker/awesome-ai-ml-resources
#DataAnalytics #Python #SQL #RProgramming #DataScience #MachineLearning #DeepLearning #Statistics #DataVisualization #PowerBI #Tableau #LinearRegression #Probability #DataWrangling #Excel #AI #ArtificialIntelligence #BigData #DataAnalysis #NeuralNetworks #SupervisedLearning #IBMDataScience #FreeCourses #Certification #LearnDataScience https://t.me/CodeProgrammer 🖥
516
12:21
18.02.2025
PIKE-RAG: sPecIalized KnowledgE and Rationale Augmented Generation
20 Jan 2025 · Jinyu Wang, Jingjing Fu, Rui Wang, Lei Song, Jiang Bian
Despite notable advancements in Retrieval-Augmented Generation (RAG) systems that expand large language model (LLM) capabilities through external retrieval, these systems often struggle to meet the complex and diverse needs of real-world industrial applications. The reliance on retrieval alone proves insufficient for extracting deep, domain-specific knowledge and performing logical reasoning from specialized corpora. To address this, we introduce sPecIalized KnowledgE and Rationale Augmented Generation (PIKE-RAG), focusing on extracting, understanding, and applying specialized knowledge, while constructing coherent rationale to incrementally steer LLMs toward accurate responses. Recognizing the diverse challenges of industrial tasks, we introduce a new paradigm that classifies tasks based on their complexity in knowledge extraction and application, allowing for a systematic evaluation of RAG systems' problem-solving capabilities. This strategic approach offers a roadmap for the phased development and enhancement of RAG systems, tailored to meet the evolving demands of industrial applications. Furthermore, we propose knowledge atomizing and knowledge-aware task decomposition to effectively extract multifaceted knowledge from the data chunks and iteratively construct the rationale based on the original query and the accumulated knowledge, respectively, showcasing exceptional performance across various benchmarks.
Paper: https://arxiv.org/pdf/2501.11551v2.pdf
Code: https://github.com/microsoft/pike-rag
Datasets: HotpotQA, 2WikiMultiHopQA
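To make "knowledge atomizing and knowledge-aware task decomposition" concrete, here is a hedged, schematic sketch of such a loop. The `llm` and `retrieve` callables are generic placeholders, and the prompts and control flow are illustrative rather than taken from the PIKE-RAG codebase.

```python
# Schematic sketch of an atomize-then-decompose RAG loop; not the PIKE-RAG API.
from typing import Callable, List

def pike_style_answer(question: str,
                      chunks: List[str],
                      llm: Callable[[str], str],
                      retrieve: Callable[[str, List[str]], List[str]],
                      max_steps: int = 4) -> str:
    # 1) Knowledge atomizing: break each chunk into self-contained statements.
    atoms: List[str] = []
    for chunk in chunks:
        atoms += llm(f"Rewrite as independent atomic facts:\n{chunk}").splitlines()

    # 2) Knowledge-aware decomposition: ask for the next sub-question given what
    #    has been gathered so far, retrieve supporting atoms, grow the rationale.
    rationale: List[str] = []
    for _ in range(max_steps):
        sub_q = llm(f"Question: {question}\nKnown: {rationale}\nNext sub-question (or DONE):")
        if sub_q.strip() == "DONE":
            break
        support = retrieve(sub_q, atoms)
        rationale.append(llm(f"Answer '{sub_q}' using only: {support}"))

    # 3) Final answer conditioned on the accumulated rationale.
    return llm(f"Question: {question}\nRationale: {rationale}\nFinal answer:")
```

The point of the structure is that retrieval targets small atomic facts rather than raw chunks, and each retrieval round is conditioned on the rationale built so far.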
#DataScience #ArtificialIntelligence #MachineLearning #PythonProgramming #DeepLearning #LLM #AIResearch #BigData #NeuralNetworks #DataAnalytics #NLP #AutoML #DataVisualization #ScikitLearn #Pandas #NumPy #TensorFlow #AIethics #PredictiveModeling #GPUComputing #OpenSourceAI #DeepSeek https://t.me/DataScienceT
974
07:04
18.02.2025
OmniParser for Pure Vision Based GUI Agent
1 Aug 2024 · Yadong Lu, Jianwei Yang, Yelong Shen, Ahmed Awadallah
The recent success of large vision-language models shows great potential in driving agent systems that operate on user interfaces. However, we argue that the power of multimodal models like GPT-4V as a general agent on multiple operating systems across different applications is largely underestimated due to the lack of a robust screen parsing technique capable of: 1) reliably identifying interactable icons within the user interface, and 2) understanding the semantics of various elements in a screenshot and accurately associating the intended action with the corresponding region on the screen. To fill these gaps, we introduce OmniParser, a comprehensive method for parsing user interface screenshots into structured elements, which significantly enhances the ability of GPT-4V to generate actions that can be accurately grounded in the corresponding regions of the interface. We first curated an interactable icon detection dataset using popular webpages and an icon description dataset. These datasets were utilized to fine-tune specialized models: a detection model to parse interactable regions on the screen and a caption model to extract the functional semantics of the detected elements. OmniParser significantly improves GPT-4V's performance on the ScreenSpot benchmark. On the Mind2Web and AITW benchmarks, OmniParser with screenshot-only input outperforms the GPT-4V baselines that require additional information beyond the screenshot.
Paper: https://arxiv.org/pdf/2408.00203v1.pdf
Code: https://github.com/microsoft/omniparser
Dataset: ScreenSpot
Note: Ranked #10 on Natural Language Visual Grounding on ScreenSpot
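The described pipeline (a detector that finds interactable regions plus a captioner that extracts their functional semantics, whose structured output is handed to the planning LLM) could look roughly like the sketch below. The `detector` and `captioner` callables are placeholders, not the models released with OmniParser.

```python
# Rough sketch of a detect-then-caption screen parsing pipeline; placeholder models.
from dataclasses import dataclass
from typing import Callable, List, Tuple
from PIL import Image

@dataclass
class UIElement:
    box: Tuple[int, int, int, int]   # (left, top, right, bottom) in pixels
    description: str                 # functional semantics of the element

def parse_screenshot(screenshot: Image.Image,
                     detector: Callable[[Image.Image], List[Tuple[int, int, int, int]]],
                     captioner: Callable[[Image.Image], str]) -> List[UIElement]:
    elements = []
    for box in detector(screenshot):             # interactable regions
        crop = screenshot.crop(box)
        elements.append(UIElement(box=box, description=captioner(crop)))
    return elements

def to_prompt(elements: List[UIElement]) -> str:
    # The structured text a planning model such as GPT-4V can ground actions in.
    return "\n".join(f"[{i}] {e.box}: {e.description}" for i, e in enumerate(elements))
```

The planner then only has to pick an element index and an action, rather than reason about raw pixels, which is what makes screenshot-only input competitive.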
#DataScience #ArtificialIntelligence #MachineLearning #PythonProgramming #DeepLearning #LLM #AIResearch #BigData #NeuralNetworks #DataAnalytics #NLP #AutoML #DataVisualization #ScikitLearn #Pandas #NumPy #TensorFlow #AIethics #PredictiveModeling #GPUComputing #OpenSourceAI #DeepSeek https://t.me/DataScienceT
883
06:58
18.02.2025
FlashVideo: Flowing Fidelity to Detail for Efficient High-Resolution Video Generation
7 Feb 2025 · Shilong Zhang, Wenbo Li, Shoufa Chen, Chongjian Ge, Peize Sun, Yida Zhang, Yi Jiang, Zehuan Yuan, Binyue Peng, Ping Luo
DiT diffusion models have achieved great success in text-to-video generation, leveraging their scalability in model capacity and data scale. High content and motion fidelity aligned with text prompts, however, often require large model parameters and a substantial number of function evaluations (NFEs). Realistic and visually appealing details are typically reflected in high-resolution outputs, further amplifying computational demands, especially for single-stage DiT models. To address these challenges, we propose a novel two-stage framework, FlashVideo, which strategically allocates model capacity and NFEs across stages to balance generation fidelity and quality. In the first stage, prompt fidelity is prioritized through a low-resolution generation process utilizing large parameters and sufficient NFEs to enhance computational efficiency. The second stage establishes flow matching between low and high resolutions, effectively generating fine details with minimal NFEs. Quantitative and visual results demonstrate that FlashVideo achieves state-of-the-art high-resolution video generation with superior computational efficiency. Additionally, the two-stage design enables users to preview the initial output before committing to full-resolution generation, thereby significantly reducing computational costs and wait times as well as enhancing commercial viability.
Paper: https://arxiv.org/pdf/2502.05179v1.pdf
Code: https://github.com/foundationvision/flashvideo
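Read literally, the two-stage split amounts to spending most NFEs on a low-resolution pass and then a handful of flow-matching (Euler) steps to add high-resolution detail. The sketch below illustrates that control flow with placeholder callables; it is not the FlashVideo implementation.

```python
# Control-flow sketch of a two-stage low-res-then-flow-matching pipeline.
# All callables are placeholders for the actual stage-1/stage-2 models.
from typing import Callable
import torch

def two_stage_generate(prompt: str,
                       stage1: Callable[[str, int], torch.Tensor],        # prompt, nfe -> low-res video
                       upsample: Callable[[torch.Tensor], torch.Tensor],  # naive spatial upsampling
                       stage2_velocity: Callable[[torch.Tensor, float, str], torch.Tensor],
                       nfe_stage1: int = 50,
                       nfe_stage2: int = 4) -> torch.Tensor:
    # Stage 1: low-resolution generation with a generous NFE budget for prompt fidelity.
    low_res = stage1(prompt, nfe_stage1)          # (frames, C, h, w)
    x = upsample(low_res)                         # starting point of the flow
    # Stage 2: a few Euler steps of flow matching toward the high-resolution target.
    dt = 1.0 / nfe_stage2
    for step in range(nfe_stage2):
        t = step * dt
        x = x + dt * stage2_velocity(x, t, prompt)
    return x
```

The "preview" property follows directly from this structure: the stage-1 output already exists before any stage-2 compute is spent.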
#DataScience #ArtificialIntelligence #MachineLearning #PythonProgramming #DeepLearning #LLM #AIResearch #BigData #NeuralNetworks #DataAnalytics #NLP #AutoML #DataVisualization #ScikitLearn #Pandas #NumPy #TensorFlow #AIethics #PredictiveModeling #GPUComputing #OpenSourceAI #DeepSeek https://t.me/DataScienceT
1090
09:38
17.02.2025
Follow me on X. I will share useful courses and posts:
https://x.com/EngSheikho
1068
06:58
17.02.2025
LIMO: Less is More for Reasoning
5 Feb 2025 · Yixin Ye, Zhen Huang, Yang Xiao, Ethan Chern, Shijie Xia, PengFei Liu
We present a fundamental discovery that challenges our understanding of how complex reasoning emerges in large language models. While conventional wisdom suggests that sophisticated reasoning tasks demand extensive training data (>100,000 examples), we demonstrate that complex mathematical reasoning abilities can be effectively elicited with surprisingly few examples. Through comprehensive experiments, our proposed model LIMO demonstrates unprecedented performance in mathematical reasoning. With merely 817 curated training samples, LIMO achieves 57.1% accuracy on AIME and 94.8% on MATH, improving from previous SFT-based models' 6.5% and 59.2% respectively, while only using 1% of the training data required by previous approaches. LIMO demonstrates exceptional out-of-distribution generalization, achieving 40.5% absolute improvement across 10 diverse benchmarks, outperforming models trained on 100x more data, challenging the notion that SFT leads to memorization rather than generalization. Based on these results, we propose the Less-Is-More Reasoning Hypothesis (LIMO Hypothesis): in foundation models where domain knowledge has been comprehensively encoded during pre-training, sophisticated reasoning capabilities can emerge through minimal but precisely orchestrated demonstrations of cognitive processes. This hypothesis posits that the elicitation threshold for complex reasoning is determined by two key factors: (1) the completeness of the model's encoded knowledge foundation during pre-training, and (2) the effectiveness of post-training examples as "cognitive templates" that show the model how to utilize its knowledge base to solve complex reasoning tasks. To facilitate reproducibility and future research in data-efficient reasoning, we release LIMO as an open-source suite.
Paper: https://arxiv.org/pdf/2502.03387v1.pdf
Codes:
https://github.com/gair-nlp/limo
https://github.com/zhaoolee/garss
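The recipe the abstract implies is plain supervised fine-tuning on a few hundred carefully curated reasoning traces. Below is a minimal, hypothetical sketch with Hugging Face Transformers; the file name, base model, and hyperparameters are illustrative placeholders, not the paper's settings.

```python
# Hypothetical SFT sketch; dataset path, base model, and hyperparameters are placeholders.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

base = "meta-llama/Llama-3.1-8B"                       # placeholder base model
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token              # ensure a pad token exists
model = AutoModelForCausalLM.from_pretrained(base)

# A few hundred curated {"problem": ..., "solution": ...} records in JSONL form.
data = load_dataset("json", data_files="curated_reasoning_samples.jsonl")["train"]

def tokenize(example):
    text = example["problem"] + "\n" + example["solution"] + tokenizer.eos_token
    return tokenizer(text, truncation=True, max_length=4096)

data = data.map(tokenize, remove_columns=data.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="limo-style-sft", num_train_epochs=3,
                           per_device_train_batch_size=1, learning_rate=1e-5),
    train_dataset=data,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

The paper's claim is about data curation, not training machinery, so the interesting work lives in selecting the few hundred "cognitive template" examples rather than in this loop.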
#DataScience #ArtificialIntelligence #MachineLearning #PythonProgramming #DeepLearning #LLM #AIResearch #BigData #NeuralNetworks #DataAnalytics #NLP #AutoML #DataVisualization #ScikitLearn #Pandas #NumPy #TensorFlow #AIethics #PredictiveModeling #GPUComputing #OpenSourceAI #DeepSeek https://t.me/DataScienceT
1407
06:56
16.02.2025
Accelerating Data Processing and Benchmarking of AI Models for Pathology
10 Feb 2025 · Andrew Zhang, Guillaume Jaume, Anurag Vaidya, Tong Ding, Faisal Mahmood
Advances in foundation modeling have reshaped computational pathology. However, the increasing number of available models and lack of standardized benchmarks make it increasingly complex to assess their strengths, limitations, and potential for further development. To address these challenges, we introduce a new suite of software tools for whole-slide image processing, foundation model benchmarking, and curated publicly available tasks. We anticipate that these resources will promote transparency, reproducibility, and continued progress in the field.
Paper: https://arxiv.org/pdf/2502.06750v1.pdf
Codes:
https://github.com/mahmoodlab/trident
https://github.com/mahmoodlab/patho-bench
#DataScience #ArtificialIntelligence #MachineLearning #PythonProgramming #DeepLearning #LLM #AIResearch #BigData #NeuralNetworks #DataAnalytics #NLP #AutoML #DataVisualization #ScikitLearn #Pandas #NumPy #TensorFlow #AIethics #PredictiveModeling #GPUComputing #OpenSourceAI #DeepSeek https://t.me/DataScienceT
1487
15:51
15.02.2025
Channel reviews
4.6
1 review over the last 6 months
Very good (100%) in the last 6 months
**ikpelli@********.com
On the service since January 2025
07.01.2025 16:57
4
Ad has been placed.
Channel statistics
Rating: 19.4
Reviews rating: 4.6
Channel rating: 82
Followers: 27.4K
APV: locked
ER: 2.8%
Posts per day: 3.0
CPM: locked