# Release 405 (28 Dec 2022)
## General

* Add Trino version to the output of `EXPLAIN`. ({issue}`15317`)
* ... in the output of `EXPLAIN ANALYZE VERBOSE`. ({issue}`15286`)
* ... in the output of `EXPLAIN ANALYZE`. ({issue}`15286`)
* Add support for the `ALTER COLUMN ... SET DATA TYPE` statement, as shown in
  the example after this list. ({issue}`11608`)
* Allow configuring the refresh interval of the database resource group manager
  with the `resource-groups.refresh-interval` configuration property. ({issue}`14514`)
* Improve performance of queries that compare `date` columns with
  `timestamp(n) with time zone` literals. ({issue}`5798`)
* ... ({issue}`14718`, {issue}`14874`)
* Improve performance of `INSERT` queries when fault-tolerant execution is
  enabled. ({issue}`14735`)
* Improve performance of queries with large `GROUP BY` clauses. ({issue}`15292`)
* ... ({issue}`15369`)
* Rename the `node-scheduler.max-pending-splits-per-task` configuration property
  to `node-scheduler.min-pending-splits-per-task`. ({issue}`15168`)
* ... ({issue}`14459`)
* Fix incorrect rounding of `time(n)` and `time(n) with time zone` values near
  the top of the range of allowed values. ({issue}`15138`)
* Fix incorrect results for queries involving window functions without a
  `PARTITION BY` clause followed by the evaluation of window functions with a
  `PARTITION BY` and `ORDER BY` clause. ({issue}`15203`)
* Fix incorrect results when subtracting an `interval` from a
  `timestamp with time zone`. ({issue}`15103`)
* ... ({issue}`15334`)
* Fix failure of queries involving `MATCH_RECOGNIZE`. ({issue}`15343`)
* Fix incorrect reporting of `Projection CPU time` in the output of
  `EXPLAIN ANALYZE VERBOSE`. ({issue}`15364`)
* Fix `SET TIME ZONE LOCAL` to correctly reset to the initial time zone of the
  client session. ({issue}`15314`)
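For illustration, a minimal example of the new statement; the table and column
names are hypothetical, and the target type must be supported by the connector:

```sql
-- Widen a column's type; "orders" and "price" are hypothetical names.
ALTER TABLE orders ALTER COLUMN price SET DATA TYPE decimal(18, 4);
```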
## Security

* ... ({issue}`14962`)
* ... ({issue}`14008`)
* Fix some `system.metadata` tables improperly showing the names of catalogs
  which the user cannot access. ({issue}`14000`)
* Fix the `USE` statement improperly disclosing the names of catalogs and
  schemas which the user cannot access. ({issue}`14208`)
* ... ({issue}`15336`)

## Web UI

* ... ({issue}`15339`)

## JDBC driver

* Return correct values in the `NULLABLE` columns of the
  `DatabaseMetaData.getColumns` result. ({issue}`15214`)
## BigQuery connector

* Add experimental support for Apache Arrow serialization when reading from
  BigQuery. This can be enabled with the
  `bigquery.experimental.arrow-serialization.enabled` catalog configuration
  property, as shown in the sketch after this list. ({issue}`14972`)
* ... the `bigquery.project-id` catalog property. ({issue}`14083`)
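As a sketch, enabling the experimental Arrow serialization in a BigQuery
catalog properties file; the file path is illustrative, and the property is
assumed to be disabled by default:

```properties
# etc/catalog/bigquery.properties (illustrative path)
connector.name=bigquery
# Enable experimental Apache Arrow serialization for reads
bigquery.experimental.arrow-serialization.enabled=true
```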
## Delta Lake connector

* ... ({issue}`11609`)
* Add support for configuring batch size for reads on Parquet files using the
  `parquet.max-read-block-row-count` configuration property or the
  `parquet_max_read_block_row_count` session property. ({issue}`15474`)
* Improve performance of the `vacuum` procedure on S3-compatible
  storage. ({issue}`15072`)
* Improve performance of `INSERT`, `MERGE`, and
  `CREATE TABLE ... AS SELECT` queries. ({issue}`14407`)
* Improve performance of reading Parquet files for `boolean`, `tinyint`,
  `short`, `int`, `long`, `float`, `double`, `short decimal`, `UUID`, `time`,
  `decimal`, `varchar`, and `char` data types. This optimization can be disabled
  with the `parquet.optimized-reader.enabled` catalog configuration
  property. ({issue}`14423`, {issue}`14667`)
* Improve query performance when the `nulls fraction` statistic is not available
  for some columns. ({issue}`15132`)
* ... ({issue}`15257`, {issue}`15474`)
* ... ({issue}`15268`)
* Improve `DROP TABLE` performance for tables stored on AWS S3. ({issue}`13974`)
* Improve performance of reading Parquet files for `timestamp` and
  `timestamp with timezone` data types. ({issue}`15204`)
* ... ({issue}`15168`)
* ... ({issue}`15374`)
* Add support for registering existing tables with the `register_table`
  procedure, as shown in the example after this list. ({issue}`13568`)
* Deprecate creating a new table with existing table content. This can be
  re-enabled with the `delta.legacy-create-table-with-existing-location.enabled`
  configuration property or the
  `legacy_create_table_with_existing_location_enabled` session
  property. ({issue}`13568`)
* ... ({issue}`5729`)
* Fix `DROP TABLE` leaving files behind when using managed tables stored on S3
  and created by the Databricks runtime. ({issue}`13017`)
* ... ({issue}`15183`)
* Fix `INSERT` failure for tables stored on S3. ({issue}`15476`)
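A minimal sketch of calling the new procedure, assuming a catalog named
`delta` and that the procedure takes schema, table, and location arguments;
all names and the S3 location are hypothetical:

```sql
-- Register an existing Delta Lake table directory as a table in the metastore.
-- Catalog name, schema, table, and location are all hypothetical.
CALL delta.system.register_table(
    schema_name => 'default',
    table_name => 'events',
    table_location => 's3://example-bucket/path/to/events');
```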
## Google Sheets connector

* Add support for setting a read timeout with the `gsheets.read-timeout`
  configuration property. ({issue}`15322`)
* Add support for `base64`-encoded credentials using the
  `gsheets.credentials-key` configuration property. ({issue}`15477`)
* Rename the `credentials-path` configuration property to
  `gsheets.credentials-path`, `metadata-sheet-id` to
  `gsheets.metadata-sheet-id`, `sheets-data-max-cache-size` to
  `gsheets.max-data-cache-size`, and `sheets-data-expire-after-write` to
  `gsheets.data-cache-ttl`. A sketch of an updated catalog file follows this
  list. ({issue}`15042`)
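A sketch of a catalog file using the renamed properties; the file path and all
values are illustrative:

```properties
# etc/catalog/sheets.properties (illustrative path and values)
connector.name=gsheets
# Formerly credentials-path
gsheets.credentials-path=/path/to/credentials.json
# Formerly metadata-sheet-id
gsheets.metadata-sheet-id=exampleSheetId
# Formerly sheets-data-max-cache-size
gsheets.max-data-cache-size=1000
# Formerly sheets-data-expire-after-write
gsheets.data-cache-ttl=5m
```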
## Hive connector

* ... the `UNIONTYPE` Hive type. ({issue}`15278`)
* Add support for configuring batch size for reads on Parquet files using the
  `parquet.max-read-block-row-count` configuration property or the
  `parquet_max_read_block_row_count` session property, as shown in the example
  after this list. ({issue}`15474`)
* Improve performance of `INSERT`, `MERGE`, and `CREATE TABLE AS SELECT`
  queries. ({issue}`14407`)
* Improve performance of reading Parquet files for `boolean`, `tinyint`,
  `short`, `int`, `long`, `float`, `double`, `short decimal`, `UUID`, `time`,
  `decimal`, `varchar`, and `char` data types. This optimization can be disabled
  with the `parquet.optimized-reader.enabled` catalog configuration
  property. ({issue}`14423`, {issue}`14667`)
* ... ({issue}`15241`, {issue}`15066`)
* ... ({issue}`15257`, {issue}`15474`)
* ... ({issue}`15268`)
* Improve `DROP TABLE` performance for tables stored on AWS S3. ({issue}`13974`)
* Improve performance of reading Parquet files for `timestamp` and
  `timestamp with timezone` data types. ({issue}`15204`)
* ... ({issue}`15168`)
* ... ({issue}`15374`)
* ... ({issue}`14673`)
* ... ({issue}`5729`)
* Fix `schema already exists` error caused by a client timeout when
  creating a new schema. ({issue}`15174`)
* ... ({issue}`14746`)
* Fix `INSERT` failure on ORC ACID tables when Apache Hive 3.1.2 is used as a
  metastore. ({issue}`7310`)
* ... `char` types. ({issue}`15470`)
* Fix `INSERT` failure for tables stored on S3. ({issue}`15476`)
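For illustration, adjusting the Parquet read batch size for a single session;
the catalog name `hive` and the value shown are example choices:

```sql
-- Raise the Parquet read batch size for the current session only;
-- the catalog name "hive" and the value 8192 are illustrative.
SET SESSION hive.parquet_max_read_block_row_count = 8192;
```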
## Hudi connector

* Improve performance of reading Parquet files for `boolean`, `tinyint`,
  `short`, `int`, `long`, `float`, `double`, `short decimal`, `UUID`, `time`,
  `decimal`, `varchar`, and `char` data types. This optimization can be disabled
  with the `parquet.optimized-reader.enabled` catalog configuration
  property. ({issue}`14423`, {issue}`14667`)
* ... ({issue}`15268`)
* Improve performance of reading Parquet files for `timestamp` and
  `timestamp with timezone` data types. ({issue}`15204`)
* ... ({issue}`15168`)
* ... ({issue}`15374`)
* ... ({issue}`5729`)
## Iceberg connector

* Add support for configuring batch size for reads on Parquet files using the
  `parquet.max-read-block-row-count` configuration property or the
  `parquet_max_read_block_row_count` session property. ({issue}`15474`)
* ... ({issue}`13294`)
* Improve performance of `INSERT`, `MERGE`, and `CREATE TABLE AS SELECT`
  queries. ({issue}`14407`)
* Improve performance of reading Parquet files for `boolean`, `tinyint`,
  `short`, `int`, `long`, `float`, `double`, `short decimal`, `UUID`, `time`,
  `decimal`, `varchar`, and `char` data types. This optimization can be disabled
  with the `parquet.optimized-reader.enabled` catalog configuration
  property. ({issue}`14423`, {issue}`14667`)
* ... ({issue}`15257`, {issue}`15474`)
* ... ({issue}`15268`)
* Improve `DROP TABLE` performance for tables stored on AWS S3. ({issue}`13974`)
* Improve performance of reading Parquet files for `timestamp` and
  `timestamp with timezone` data types. ({issue}`15204`)
* ... ({issue}`15168`)
* ... ({issue}`15374`)
* Improve performance when projections of `row` columns on Parquet files are
  pushed into the connector. ({issue}`15408`)
* ... ({issue}`5729`)
* Fix `REFRESH MATERIALIZED VIEW` failure when the materialized view is based on
  non-Iceberg tables. ({issue}`13131`)
* ... ({issue}`14971`)
* Fix `INSERT` failure for tables stored on S3. ({issue}`15476`)
## MongoDB connector

* ... ({issue}`14734`)
* ... ({issue}`15062`)
* ... ({issue}`15240`)
* ... the `query` table function. ({issue}`15329`)
* Rename the `mongodb.ssl.enabled` configuration property to
  `mongodb.tls.enabled`. ({issue}`15240`)
* ... ({issue}`15062`)
* ... ({issue}`15226`)
* Remove the `mongodb.seeds` and `mongodb.credentials` configuration
  properties. ({issue}`15263`)
* ... ({issue}`1398`)
* Fix `NullPointerException` when a column name contains uppercase characters in
  the `query` table function. ({issue}`15294`)
* Fix potential incorrect results when the `objectid` function is used more than
  once within a single query. ({issue}`15426`)

## PostgreSQL connector

* Fix failure when the `query` table function contains a `WITH`
  clause. ({issue}`15332`)
* Fix incorrect results when a `FULL JOIN` is pushed down. ({issue}`14841`)

## Redshift connector

* Add support for `ORDER BY ... LIMIT` pushdown. ({issue}`15365`)
* Add support for `DELETE`. ({issue}`15365`)
* ... ({issue}`15365`)
* Update the type mapping; the previous mapping can be restored with the
  `redshift.use-legacy-type-mapping` configuration property, as shown in the
  sketch after this list. ({issue}`15365`)
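A sketch of restoring the previous type mapping in a Redshift catalog file;
the file path is illustrative:

```properties
# etc/catalog/redshift.properties (illustrative path)
connector.name=redshift
# Restore the type mapping behavior from releases before 405
redshift.use-legacy-type-mapping=true
```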
## SPI

* Remove the deprecated `ConnectorNodePartitioningProvider.getBucketNodeMap()`
  method. ({issue}`14067`)
* Use the `MERGE` APIs in the engine to execute `DELETE` and `UPDATE`.
  Require connectors to implement `beginMerge()` and related APIs.
  Deprecate `beginDelete()`, `beginUpdate()` and `UpdatablePageSource`, which
  are unused and do not need to be implemented. ({issue}`13926`)