Change Log
New Features
1. Yeedu AssistantX (AI Assistant for SQL and Python Notebooks)
AI-powered assistant that provides intelligent suggestions, code completions, and guidance directly inside SQL and Python notebooks.
2. Hive Metastore Integration
- Create and manage Hive Metastore directly from the UI
- Supports Basic and Kerberos authentication
- Hive Metastore files supported across all cluster types
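For reference, a minimal sketch of the Spark-side settings involved when a session talks to an external Hive Metastore. The thrift URI and Kerberos principal below are placeholders; in Yeedu the metastore is configured through the UI, so this only illustrates the underlying configuration:

```python
# Illustrative sketch: pointing a Spark session at an external Hive Metastore.
# Host and principal are placeholders, not Yeedu-specific values.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("hms-example")
    .config("hive.metastore.uris", "thrift://metastore.example.com:9083")  # placeholder host
    # For Kerberos-secured metastores (hedged; exact keys depend on your Hive version):
    # .config("hive.metastore.sasl.enabled", "true")
    # .config("hive.metastore.kerberos.principal", "hive/_HOST@EXAMPLE.COM")
    .enableHiveSupport()
    .getOrCreate()
)

spark.sql("SHOW DATABASES").show()
```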
3. Catalog Explorer
Browse and manage Databricks Unity Catalog and Hive catalogs.
Key Benefit: Simplifies dataset discovery and governance.
4. Interactive Widgets in Notebooks
Add sliders, text inputs, and other widgets for parameterization.
Key Benefit: Enables dynamic, user-driven workflows.
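A minimal sketch of widget-driven parameterization, using the standard ipywidgets library that works in Jupyter-based notebooks; Yeedu's built-in widget controls may differ:

```python
# Illustrative example using ipywidgets; treat as a generic Jupyter pattern,
# not Yeedu's specific widget API.
import ipywidgets as widgets
from IPython.display import display

threshold = widgets.IntSlider(value=50, min=0, max=100, description="Threshold")
region = widgets.Text(value="us-east-1", description="Region")
display(threshold, region)

# Downstream cells can read the current widget values for parameterized runs:
print(f"Filtering rows above {threshold.value} in {region.value}")
```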
5. Spot Instance / VM Support
Provision spot instances with basic failover handling.
Key Benefit: Improves cost efficiency and resilience.
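A hypothetical cluster-create payload showing where spot preferences might sit; the field names are illustrative, not Yeedu's actual schema:

```python
# Hypothetical cluster configuration fragment; all keys are assumptions.
cluster_conf = {
    "name": "etl-cluster",
    "machine_type": "n2-standard-8",
    "use_spot_instances": True,       # request spot/preemptible VMs (assumed flag)
    "spot_fallback_on_demand": True,  # basic failover to on-demand capacity (assumed flag)
}
```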
6. Append Cluster Configurations to Jobs
Append cluster configuration directly to job configuration for greater flexibility.
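The append behaves like layering job-level settings over the cluster configuration. A minimal sketch of that merge semantics, with illustrative keys:

```python
# Sketch of append semantics: job-specific settings override cluster defaults.
cluster_conf = {"spark.executor.memory": "4g", "spark.executor.cores": "2"}
job_overrides = {"spark.executor.memory": "8g"}  # job-specific override

effective_conf = {**cluster_conf, **job_overrides}
print(effective_conf)  # {'spark.executor.memory': '8g', 'spark.executor.cores': '2'}
```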
7. Application ID in Python Notebooks
Direct access to Spark UI from within Python notebooks.
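The application ID is what links a session to its Spark UI. Reading it via `spark.sparkContext.applicationId` is standard PySpark (assuming the notebook's pre-created `spark` session); how Yeedu surfaces the UI link from it is product-specific:

```python
# In a PySpark notebook, `spark` is typically pre-created for you.
app_id = spark.sparkContext.applicationId
print(f"Spark application ID: {app_id}")
```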
8. Temporary Token Access
Temporary tokens are available inside jobs and notebooks to securely call APIs or use dbutils.
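A hedged sketch of calling an API with such a token; the environment variable name and endpoint are assumptions, not documented Yeedu names:

```python
# Hypothetical example; variable name and URL are placeholders.
import os
import requests

token = os.environ.get("YEEDU_TEMP_TOKEN")  # assumed variable name
resp = requests.get(
    "https://yeedu.example.com/api/v1/resource",  # placeholder endpoint
    headers={"Authorization": f"Bearer {token}"},
    timeout=30,
)
resp.raise_for_status()
print(resp.json())
```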
9. Cluster Detachment Support
Detach clusters from jobs or notebooks when they are not in a running state.
Key Benefit: Enables flexible resource management and cleanup of unused associations.
10. Custom Disk Configuration
Configure custom IOPS and throughput for disks across supported cloud providers.
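A hypothetical disk block inside a cluster configuration; the field names are assumptions, and valid IOPS/throughput ranges depend on the cloud provider and disk type:

```python
# Illustrative disk settings; all keys are assumed, not Yeedu's schema.
disk_conf = {
    "disk_type": "gp3",           # e.g. AWS gp3 supports custom IOPS/throughput
    "disk_size_gb": 500,
    "disk_iops": 6000,            # assumed field name
    "disk_throughput_mbps": 250,  # assumed field name
}
```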
Enhancements
- Clusters UX Improvements – Refined cluster management workflows for smoother interaction
- File Tree in Notebooks – File navigation with quick actions: Download, Copy Path, Open in New Tab
- Improved Log File Handling – Optimized processing for large log files using file size information
- Micro-Interactions – Improved hover effects, animations, and responsiveness
- Jupyter Startup Enhancements – Added `application.py` and `transform_percent_to_cell.py` as default startup scripts
- Dependency Management Flexibility – Optional dependency repository configuration during cluster creation
- SQL Magic File Improvements – Updated SQL execution logic for better reliability and usability
- Dbutils Automation – Automated installation of dbutils in clusters for consistent availability
- Advanced Cluster Search & Filtering – Filter clusters by type, machine type, Spark version, and cloud provider
- Partial Log Retrieval – Preview or download partial logs by specifying size or line count (illustrated after this list)
- Cluster Bootstrap Time – Added bootstrap buffer time to support minimum idle timeout configuration
- Append Parameter in Job and Notebook Configuration – Merge user-provided settings with defaults or apply only defaults
- Reduced Cluster Auto-shutdown Threshold – Minimum auto-shutdown reduced from 10 minutes to 1 minute for cost control
- Case-insensitive Search Support – All searches now support case-insensitive queries for improved usability
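For the Partial Log Retrieval item above, a hypothetical request that fetches only the tail of a large log; the endpoint and query parameter are illustrative, not Yeedu's documented API:

```python
# Hypothetical partial-log fetch; URL and parameter name are placeholders.
import requests

resp = requests.get(
    "https://yeedu.example.com/api/v1/runs/123/logs",  # placeholder endpoint
    params={"last_n_lines": 200},  # assumed parameter for partial retrieval
    headers={"Authorization": "Bearer <token>"},
    timeout=30,
)
print(resp.text)
```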
Bug Fixes
- Fixed notebooks prompting to save changes when the code was already saved
- Resolved code leakage between notebooks opened in the same tab
- Fixed WebSocket connection issue keeping cells and run-all button in interrupt state
- Corrected notebook cells showing “Running” status after execution completed
- Kernel now updates correctly after restart instead of remaining idle
- Removed WebSocket retry message after Spark session stop
- Fixed “Cannot find Jupyter URL” error in UI
- Updated query logic to retrieve Turbo values when catalog is not enabled
- Improved Docker installation support on Ubuntu 20.04
- More reliable root filesystem handling in Azure environments
- Enhanced GCP instance existence checks across all states
- Stability improvements for Telegraf to prevent memory errors
- AWS region detection updated to use `get_aws_region`
- Improved Turbo JAR copy conditions to avoid unnecessary errors
Change in Naming Conventions
We’ve standardized field names across APIs and the UI to improve consistency, simplify integrations, and align the backend with the UI.
API Changes
- `job_conf_id` → `job_id` – Update job creation & retrieval API integrations
- `job_id` → `run_id` – Update job run scripts
- `job_status` → `run_status` – Update monitoring/alerting logic
- `notebook_conf_id` → `notebook_id` – Update notebook API calls
- `notebook_id` → `run_id` – Replace in automation scripts
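A hedged migration sketch for code that parses job-run responses; only the old → new field names come from this change log, and the payload shape is a simplified illustration:

```python
# Sample payload; real responses will contain more fields.
run = {"job_id": 42, "run_id": 1001, "run_status": "DONE"}

job_id = run["job_id"]          # previously "job_conf_id"
run_id = run["run_id"]          # previously "job_id"
run_status = run["run_status"]  # previously "job_status"
print(job_id, run_id, run_status)
```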
UI Changes
- `notebook_config_id` → `notebook_id` – Update UI-based reports & dashboards
- `notebook_conf_count` → `notebook_count` – Update usage analytics
- `job_conf_count` → `job_count` – Update job count references
- `notebook_conf_details` → `notebook_details` – Align detail view queries
- `job_conf_details` → `job_details` – Update job detail panels
- `notebook_id` → `run_id` – Update UI API scripts
- `notebook_status` → `run_status` – Update monitoring dashboards