Version: v2.9.1

Change Log

New Features

1. Yeedu AssistantX (AI Assistant for SQL and Python Notebooks)

AI-powered assistant that provides intelligent suggestions, code completions, and guidance directly inside SQL and Python notebooks.

2. Hive Metastore Integration

  • Create and manage Hive Metastore directly from the UI
  • Supports Basic and Kerberos authentication
  • Hive Metastore files supported across all cluster types

3. Catalog Explorer

Browse and manage Databricks Unity and Hive catalogs.
Key Benefit: Simplifies dataset discovery and governance.

4. Interactive Widgets in Notebooks

Add sliders, text inputs, and other widgets for parameterization.
Key Benefit: Enables dynamic, user-driven workflows.

5. Spot Instance / VM Support

Provision spot instances with basic failover handling.
Key Benefit: Improves cost efficiency and resilience.

6. Append Cluster Configurations to Jobs

Append cluster configuration directly to job configuration for greater flexibility.

7. Application ID in Python Notebooks

Direct access to Spark UI from within Python notebooks.

8. Temporary Token Access

Temporary tokens available inside jobs/notebooks to securely call APIs or use dbutils.
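As a minimal sketch of consuming such a token from inside a job, assuming it is exposed to the runtime via an environment variable (the variable name `YEEDU_TEMP_TOKEN` below is illustrative, not a documented interface):

```python
import os

def auth_headers() -> dict:
    """Build Authorization headers from a temporary token exposed to the job."""
    token = os.environ["YEEDU_TEMP_TOKEN"]  # hypothetical variable name
    return {"Authorization": f"Bearer {token}"}
```

These headers can then be attached to any outbound API call made from the job or notebook.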

9. Cluster Detachment Support

Detach clusters from jobs or notebooks when they are not in a running state.
Key Benefit: Enables flexible resource management and cleanup of unused associations.

10. Custom Disk Configuration

Configure custom IOPS and throughput for disks across supported cloud providers.


Enhancements

  1. Clusters UX Improvements – Refined cluster management workflows for smoother interaction
  2. File Tree in Notebooks – File navigation with quick actions: Download, Copy Path, Open in New Tab
  3. Improved Log File Handling – Optimized processing for large log files using file size information
  4. Micro-Interactions – Improved hover effects, animations, and responsiveness
  5. Jupyter Startup Enhancements – Added application.py and transform_percent_to_cell.py as default startup scripts
  6. Dependency Management Flexibility – Optional dependency repository configuration during cluster creation
  7. SQL Magic File Improvements – Updated SQL execution logic for better reliability and usability
  8. Dbutils Automation – Automated installation of dbutils in clusters for consistent availability
  9. Advanced Cluster Search & Filtering – Filter clusters by type, machine type, Spark version, and cloud provider
  10. Partial Log Retrieval – Preview or download partial logs by specifying size or line count
  11. Cluster Bootstrap Time – Added bootstrap buffer time to support minimum idle timeout configuration
  12. Append Parameter in Job and Notebook Configuration – Merge user-provided settings with defaults or apply only defaults
  13. Reduced Cluster Auto-shutdown Threshold – Minimum auto-shutdown reduced from 10 minutes to 1 minute for cost control
  14. Case-insensitive Search Support – All searches now support case-insensitive queries for improved usability

Bug Fixes

  1. Fixed notebook prompts to save changes when code is already saved
  2. Resolved code leakage between notebooks opened in the same tab
  3. Fixed WebSocket connection issue keeping cells and run-all button in interrupt state
  4. Corrected notebook cells showing “Running” status after execution completed
  5. Kernel now updates correctly after restart instead of remaining idle
  6. Removed WebSocket retry message after Spark session stop
  7. Fixed “Cannot find Jupyter URL” error in UI
  8. Updated query logic to retrieve Turbo values when catalog is not enabled
  9. Improved Docker installation support on Ubuntu 20.04
  10. More reliable root filesystem handling in Azure environments
  11. Enhanced GCP instance existence checks across all states
  12. Stability improvements for Telegraf to prevent memory errors
  13. AWS region detection updated to use get_aws_region
  14. Improved Turbo JAR copy conditions to avoid unnecessary errors

Change in Naming Conventions

We’ve standardized field names across APIs and the UI to improve consistency, simplify integrations, and align backend naming with the UI.

API Changes

  1. job_conf_id → job_id – Update job creation & retrieval API integrations
  2. job_id → run_id – Update job run scripts
  3. job_status → run_status – Update monitoring/alerting logic
  4. notebook_conf_id → notebook_id – Update notebook API calls
  5. notebook_id → run_id – Replace in automation scripts
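Because job_conf_id becomes job_id while job_id itself becomes run_id (and likewise for the notebook fields), migration scripts should apply the renames in a single pass rather than sequentially; otherwise a value can be renamed twice. A hedged sketch (the helper function is illustrative, not part of the Yeedu API):

```python
# Mapping mirrors the API renames listed above.
RENAMES = {
    "job_conf_id": "job_id",
    "job_id": "run_id",
    "job_status": "run_status",
    "notebook_conf_id": "notebook_id",
    "notebook_id": "run_id",
}

def migrate_payload(payload: dict) -> dict:
    """Rename old API field names to new ones in one pass, so that
    job_conf_id -> job_id is not then re-renamed to run_id."""
    return {RENAMES.get(key, key): value for key, value in payload.items()}
```

For example, `migrate_payload({"job_conf_id": 1, "job_id": 2})` yields `{"job_id": 1, "run_id": 2}`.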

UI Changes

  1. notebook_config_id → notebook_id – Update UI-based reports & dashboards
  2. notebook_conf_count → notebook_count – Update usage analytics
  3. job_conf_count → job_count – Update job count references
  4. notebook_conf_details → notebook_details – Align detail view queries
  5. job_conf_details → job_details – Update job detail panels
  6. notebook_id → run_id – Update UI API scripts
  7. notebook_status → run_status – Update monitoring dashboards