
Coding interview preparation
Preparing programmers for coding interviews: questions asked by major global tech companies, plus useful programming resources.
❔What is Technical Debt, and is it ever actually acceptable to take it on? ✅ Answer:
I would define Technical Debt as the implied cost of future rework caused by choosing an easy, "quick and dirty" solution now instead of a better, more robust approach that would take longer to implement. It's a trade-off between speed and quality. In the short term, taking on technical debt is absolutely acceptable if it's a conscious, strategic decision: when building an MVP (Minimum Viable Product) to validate a business idea, or when a critical security patch needs to be deployed immediately, "perfect" code can be the enemy of survival. In these cases we "borrow" time from the future to meet a pressing deadline today.
Next, the key to managing this debt is visibility and tracking. I would ensure that any shortcuts taken are documented as "Tech Debt tickets" in the backlog. If debt is hidden or forgotten, it becomes "accidental debt," which is much harder to manage than "deliberate debt." We need to treat it like a financial loan: you can carry it for a while, but you must be aware of the "interest" (the extra time it takes to build new features on top of messy code).
Long term, I would advocate for scheduled repayment cycles: dedicating a percentage of every sprint (e.g., 10-20%) or holding dedicated "Refactoring Sprints" to pay the debt down. Left unaddressed, technical debt leads to "Software Rot," where the system becomes so brittle and complex that the team's velocity drops to near zero, effectively reaching "technical bankruptcy."
❔How would you explain the difference between a Process and a Thread? ✅ Answer:
I would define them based on their relationship to resources and execution. A Process is an independent program in execution with its own dedicated memory space (stack, heap, and registers). Because processes are isolated, they don't interfere with each other, but communicating between them (IPC) is resource-heavy.
A Thread is a "lightweight" unit of execution that lives inside a process. Multiple threads share the same memory space and resources of their parent process. This makes communication between threads very fast and efficient, but it also introduces risks like "Race Conditions," where two threads try to modify the same data at the same time.
In short: a process is the container, while threads are the workers inside that container. If a process crashes, it doesn't affect others; but if a thread crashes in a way that corrupts shared memory, it can take down the entire process.
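To make the shared-memory point concrete, here is a minimal Python sketch: two threads increment one shared counter, and a `threading.Lock` serializes the read-modify-write step that would otherwise be a race condition. (The worker function and iteration counts are my own illustration.)

```python
import threading

counter = 0
lock = threading.Lock()

def worker(iterations: int) -> None:
    """Increment the shared counter; the lock makes each += atomic."""
    global counter
    for _ in range(iterations):
        with lock:        # without this, two threads could read the same
            counter += 1  # old value and one increment would be lost

threads = [threading.Thread(target=worker, args=(100_000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 200000 -- deterministic only because of the lock
```

Both threads see the same `counter` variable because they live in the same process; two separate processes would each get their own copy and would need IPC to share it.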
❔What is the Bias-Variance trade-off, and how do you manage it? ✅ Answer:
Bias is error from overly simple assumptions (underfitting): the model misses patterns. Variance is error from over-sensitivity to noise in the training data (overfitting): the model fails on new data. Strategy: I aim for the "sweet spot" where both errors are minimized and use Cross-Validation to monitor the balance. If bias is high, I increase model complexity; if variance is high, I apply regularization or gather more data.
❔How do you choose between Precision and Recall? ✅ Answer:
It depends on the cost of a mistake. Precision matters when "false alarms" are expensive (e.g., marking a safe email as spam). Recall matters when missing a case is dangerous (e.g., failing to detect cancer). Strategy: I use the F1-Score when I need a balance between the two. For business stakeholders, I always translate these metrics into "lost revenue" or "customer trust" to make the trade-off clear.
❔What is the difference between Random Forest and XGBoost? ✅ Answer:
Random Forest uses "Bagging": it builds many independent trees in parallel and averages them. It is hard to overfit and works well out of the box. XGBoost uses "Boosting": it builds trees sequentially, where each new tree tries to fix the errors of the previous one. Strategy: I start with Random Forest as a robust baseline and move to XGBoost (or LightGBM) when I need maximum accuracy and have the time to fine-tune hyperparameters.
❔How do you handle a dataset where 99% of labels are 'Class A' and only 1% are 'Class B'? ✅ Answer:
First, I stop using Accuracy as a metric, since a "dumb" model would be 99% accurate by always guessing 'Class A'. I switch to AUPRC or confusion matrices. Techniques: I use resampling (oversampling the minority class or undersampling the majority) and set "class weights" in the model so that mistakes on the 1% are penalized more heavily.
❔What is the purpose of PCA (Principal Component Analysis)? ✅ Answer:
PCA is a dimensionality-reduction tool. It transforms many correlated features into a smaller set of uncorrelated variables called "Principal Components" while keeping as much variance (information) as possible. Usage: I use it to speed up training and reduce noise. Caution: it makes features hard to interpret, so if the business needs to know why a prediction was made, I avoid PCA and use feature selection instead.
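The Precision/Recall trade-off discussed above can be computed directly from confusion-matrix counts. A small pure-Python sketch (function names and the spam-filter numbers are my own illustration):

```python
def precision(tp: int, fp: int) -> float:
    """Of everything flagged positive, how much was right? Penalizes false alarms."""
    return tp / (tp + fp)

def recall(tp: int, fn: int) -> float:
    """Of all real positives, how many did we catch? Penalizes missed cases."""
    return tp / (tp + fn)

def f1(tp: int, fp: int, fn: int) -> float:
    """Harmonic mean of precision and recall -- a single balanced score."""
    p, r = precision(tp, fp), recall(tp, fn)
    return 2 * p * r / (p + r)

# A spam filter catches 9 of 12 real spam mails (tp=9, fn=3)
# but also flags 1 legitimate mail (fp=1):
print(round(precision(9, 1), 3))  # 0.9  -- few false alarms
print(round(recall(9, 3), 3))     # 0.75 -- but it misses a quarter of spam
print(round(f1(9, 1, 3), 3))      # 0.818
```

Note that a model can trade one for the other by moving its decision threshold, which is why the F1-Score (or the business cost of each error type) is needed to pick a single operating point.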
❔Why is testing important if the code already works? ✅ Answer:
Code that appears to work for known cases may still fail under edge conditions, future changes, or unexpected inputs. Testing provides confidence that the system behaves correctly across scenarios and helps prevent regressions.
Automated tests also improve maintainability by allowing developers to refactor safely and catch issues early in the development cycle.
In production systems, testing is less about proving the code works once and more about ensuring it continues to work as the system evolves.
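As an illustration of "works for known cases but fails on edge conditions," here is a tiny function with assert-style regression tests; the function and its cases are my own example, not from the channel. A quick manual check covers only the happy path, while the empty-input and single-element cases are exactly where untested code tends to break.

```python
def moving_average(values: list, window: int) -> list:
    """Average of each sliding window of size `window` over `values`."""
    if window <= 0:
        raise ValueError("window must be positive")
    if len(values) < window:
        return []  # edge case: not enough data for even one window
    return [sum(values[i:i + window]) / window
            for i in range(len(values) - window + 1)]

# Happy path -- the case a quick manual check would cover:
assert moving_average([1, 2, 3, 4], 2) == [1.5, 2.5, 3.5]
# Edge cases that manual checks usually miss:
assert moving_average([], 3) == []        # empty input
assert moving_average([5], 1) == [5.0]    # window equals the data length
```

Once these assertions live in a test suite, anyone refactoring `moving_average` later finds out immediately if an edge case regresses.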
❔How would you speed up a slow-loading website? ✅ Answer:
I would start by analyzing the Critical Rendering Path using tools like Lighthouse or Chrome DevTools to identify the biggest bottlenecks, whether that's large images, render-blocking JavaScript, or slow server response times.
Next, I would implement immediate "quick wins" on the frontend: compressing and converting images to modern formats like WebP, minifying CSS/JS files, and using lazy loading so that off-screen images only load when the user scrolls to them. I would also move non-essential scripts to the end of the HTML or use the async/defer attributes.
Long term, I would look at architectural improvements: a Content Delivery Network (CDN) to serve assets closer to the user, efficient browser-caching strategies, and potentially moving from Client-Side Rendering (CSR) to Server-Side Rendering (SSR) or Static Site Generation (SSG) so the first paint happens as fast as possible.
❔What is CORS, and how do you resolve 'CORS Errors' in production? ✅ Answer:
CORS (Cross-Origin Resource Sharing) is a browser security feature that prevents a web page from making requests to a different origin than the one that served it; it is the browser's way of protecting users from malicious scripts. In the short term, I would fix a CORS error by configuring the backend headers: the server must send the Access-Control-Allow-Origin header specifying the exact domain of the frontend. During development a proxy can be used, but in production the server must explicitly whitelist the allowed origins.
Long term, I would make the security strategy robust. Instead of using a wildcard (*), which is dangerous, I would implement a dynamic whitelist, ensure that "pre-flight" OPTIONS requests are handled correctly by the server, and manage authentication cookies with the SameSite and Secure attributes to keep a strong security posture while still allowing cross-origin functionality.
❔When should you choose Server-Side Rendering (SSR) over Static Site Generation (SSG)? ✅ Answer:
I would base the decision on how often the data changes. I'd use SSG for pages whose content is the same for every user and doesn't change frequently, like a blog, a documentation site, or a marketing page; SSG is the fastest option because the HTML is built once at build time. I would choose SSR for pages that need real-time, user-specific data, such as a personalized dashboard or a search results page: since the server generates a fresh HTML page for every request, the user always sees the most up-to-date information, which is also great for SEO on dynamic pages.
Long term, I would look into hybrid approaches like Incremental Static Regeneration (ISR), which keeps the speed of SSG but updates specific static pages in the background as data changes, giving the best of both worlds (performance and freshness) without taxing the server on every single hit.
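The dynamic-whitelist idea from the CORS answer can be sketched framework-agnostically: given the request's Origin header and a set of allowed origins, decide which CORS headers to send back. This is a minimal sketch of the decision logic, not a specific framework's API; the origin names are invented.

```python
from typing import Optional

# Hypothetical whitelist -- in production this might come from config or a DB.
ALLOWED_ORIGINS = {"https://app.example.com", "https://admin.example.com"}

def cors_headers(request_origin: Optional[str]) -> dict:
    """Echo back the origin only if it is explicitly whitelisted --
    never a wildcard '*' when credentials (cookies) are involved."""
    if request_origin not in ALLOWED_ORIGINS:
        return {}  # no CORS headers: the browser will block the response
    return {
        "Access-Control-Allow-Origin": request_origin,
        "Access-Control-Allow-Credentials": "true",
        "Vary": "Origin",  # caches must not reuse this header across origins
    }

print(cors_headers("https://app.example.com"))  # whitelisted: headers returned
print(cors_headers("https://evil.example"))     # {} -- not whitelisted
```

The same check would run for the pre-flight OPTIONS request, with Access-Control-Allow-Methods and Access-Control-Allow-Headers added to the allowed response.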