# Release 423
## General

* Add support for renaming nested fields in a column via `RENAME COLUMN`.
  ({issue}`16757`)
* Add support for setting the type of nested fields in a column via `SET DATA
  TYPE`. ({issue}`16959`)
* Add support for comments on materialized view columns. ({issue}`18016`)
* ({issue}`5061`)
* Improve performance of `INSERT` and `CREATE TABLE AS ... SELECT` queries.
  ({issue}`18212`)
* ({issue}`18491`)
* Improve performance of queries involving `BETWEEN` clauses. ({issue}`18501`)
* Improve performance of queries containing redundant `ORDER BY` clauses in
  views or `WITH` clauses. This may affect the semantics of queries that
  incorrectly rely on implementation-specific behavior. The old behavior can be
  restored via the `skip_redundant_sort` session property or the
  `optimizer.skip-redundant-sort` configuration property. ({issue}`18159`)
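For queries that relied on the old behavior, the optimization can be switched
off per session; a minimal sketch, assuming the property defaults to `true`:

```sql
-- Restore the previous behavior of keeping redundant ORDER BY clauses
SET SESSION skip_redundant_sort = false;
```

The cluster-wide equivalent is setting `optimizer.skip-redundant-sort=false` in
the configuration properties.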
* Reduce the default values for the `task.partitioned-writer-count` and
  `task.scale-writers.max-writer-count` configuration properties to reduce the
  memory requirements of queries that write data. ({issue}`18488`)
* Remove the deprecated `optimizer.use-mark-distinct` configuration property,
  which has been replaced with `optimizer.mark-distinct-strategy`.
  ({issue}`18540`)
* ({issue}`18383`)
* Fix `EXPLAIN` failure when a query contains `WHERE ... IN (NULL)`.
  ({issue}`18328`)
* ({issue}`17853`)
* ({issue}`12587`)

## Delta Lake connector

* Add support for the `CASCADE` option in `DROP SCHEMA` statements.
  ({issue}`18305`)
* Add support for the `COMMENT ON VIEW` statement. ({issue}`18516`)
* Add a `$properties` system table which can be queried to inspect Delta Lake
  table properties. ({issue}`17294`)
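The `$properties` table follows Trino's usual metadata-table naming, so it can
be read with a regular `SELECT`; a sketch in which the catalog, schema, and
table names are illustrative:

```sql
-- Inspect Delta Lake table properties through the $properties system table
SELECT *
FROM example_catalog.example_schema."example_table$properties";
```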
* Add support for the `timestamp_ntz` type. ({issue}`17502`)
* Add support for the `timestamp with time zone` type on partitioned
  columns. ({issue}`16822`)
* Add support for requiring a filter on the partition columns of a query. This
  can be enabled by setting the
  `delta.query-partition-filter-required` configuration property or the
  `query_partition_filter_required` session property to `true`.
  ({issue}`18345`)
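Once the requirement is enabled, queries without a partition filter are
rejected. A sketch of enabling it for a single session; the catalog name
`delta` is illustrative:

```sql
-- Require a partition filter for Delta Lake queries in this session
SET SESSION delta.query_partition_filter_required = true;
```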
* Improve performance of the `$history` system table. ({issue}`18427`)
* Improve memory accounting of the Parquet writer. ({issue}`18564`)
* ({issue}`18200`)
* Fix handling of `file_size_threshold` as part of `OPTIMIZE`. ({issue}`18388`)
* ({issue}`18206`)

## Hive connector

* Add support for changing a column's type from `varchar` to `timestamp`.
  ({issue}`18014`)
* Improve memory accounting of the Parquet writer. ({issue}`18564`)
* Remove the legacy Parquet writer, along with the
  `parquet.optimized-writer.enabled` configuration property and the
  `parquet_optimized_writer_enabled` session property. Replace the
  `parquet.optimized-writer.validation-percentage` configuration property with
  `parquet.writer.validation-percentage`. ({issue}`18420`)
* Fix incorrect coercion of `timestamp` types to `varchar` for dates before
  1900. ({issue}`18004`)
* `timestamp` values. ({issue}`18003`)
* Fix handling of `file_size_threshold` as part of `OPTIMIZE`. ({issue}`18388`)
* ({issue}`18206`)
* Fix potential incorrect results with S3 Select pushdown when a predicate
  involves a quote character (`"`) or a decimal column. ({issue}`17775`)
* Add the `hive.s3select-pushdown.experimental-textfile-pushdown-enabled`
  configuration property to enable S3 Select pushdown for `TEXTFILE` tables.
  ({issue}`17775`)

## Hudi connector

* ({issue}`18206`)

## Iceberg connector

* Add support for renaming nested fields in a column via `RENAME COLUMN`.
  ({issue}`16757`)
* Add support for setting the type of nested fields in a column via `SET DATA
  TYPE`. ({issue}`16959`)
* Add support for comments on materialized view columns. ({issue}`18016`)
* Add support for the `tinyint` and `smallint` types in the `migrate`
  procedure. ({issue}`17946`)
* ({issue}`18535`)
* Improve performance of `information_schema.columns` queries for tables managed
  by Trino with AWS Glue as metastore. ({issue}`18315`)
* Improve performance of `system.metadata.table_comments` when querying Iceberg
  tables backed by AWS Glue as metastore. ({issue}`18517`)
* Improve performance of `information_schema.columns` when using the Glue
  catalog. ({issue}`18586`)
* Improve memory accounting of the Parquet writer. ({issue}`18564`)
* Fix handling of `file_size_threshold` as part of `OPTIMIZE`. ({issue}`18388`)
* ({issue}`18206`)
* ({issue}`18205`)
* `time` columns. ({issue}`15606`)
* Fix query failure when using `$path` as part of
  a `WHERE` clause. ({issue}`18330`)
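The fix concerns queries that filter on the hidden `$path` column; a sketch of
such a query, with all names and the S3 location illustrative:

```sql
-- Restrict a scan to rows coming from a single data file
SELECT *
FROM example_catalog.example_schema.example_table
WHERE "$path" = 's3://example-bucket/data/part-00000.parquet';
```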
* Fix incorrect conflict detection for `UPDATE`, `DELETE`, and `MERGE`
  operations. In rare situations this issue may have resulted in duplicate rows
  when multiple operations were run at the same time, or at the same time as an
  optimize procedure. ({issue}`18533`)

## Kafka connector

* Rename the `ADD_DUMMY` value for the `kafka.empty-field-strategy`
  configuration property and the `empty_field_strategy` session property to
  `MARK` ({issue}`18485`).
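Existing configurations that used `ADD_DUMMY` need to switch to the new name; a
sketch of the session form, with `kafka` as an illustrative catalog name:

```sql
-- Use the renamed MARK strategy for handling empty message fields
SET SESSION kafka.empty_field_strategy = 'MARK';
```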
* ({issue}`18121`)
* Add support for the `CASCADE` option in `DROP SCHEMA` statements.
  ({issue}`18305`)
* `char` and `decimal` type. ({issue}`18382`)
* `=`, `<>`, `IN`, `NOT IN`, and `LIKE` operators on case-sensitive `varchar`
  and `nvarchar` columns. ({issue}`18140`, {issue}`18441`)
* Add support for the `CASCADE` option in `DROP SCHEMA` statements.
  ({issue}`18305`)
* `timestamp` types with non-millisecond precision. ({issue}`17934`)
* Add support for the `CASCADE` option in `DROP SCHEMA` statements.
  ({issue}`18305`)
* Add support for the `CASCADE` option in `DROP SCHEMA` statements.
  ({issue}`18305`)

## SPI

* Deprecate the
  `ConnectorMetadata.getTableHandle(ConnectorSession, SchemaTableName)`
  method signature. Connectors should implement
  `ConnectorMetadata.getTableHandle(ConnectorSession, SchemaTableName, Optional, Optional)`
  instead. ({issue}`18596`)
* Remove the deprecated `supportsReportingWrittenBytes` method from
  `ConnectorMetadata`. ({issue}`18617`)