MySQL Version History (Page 13)


MySQL Version History List

MySQL is an open-source RDBMS (relational database management system) that supports requests written in a variety of programming languages, including C, C++, Java, Perl, and PHP. Thanks to its high speed and flexibility, MySQL has become one of the most popular database systems, used mainly for developing web applications of all shapes and sizes. Since it came to market in 1995, this enormously popular open-source database management system has been used in countless projects that reach almost every Internet user today. Some of the most popular MySQL users today are ...



MySQL 8.0.23 (64-bit)

Updated: 2021-01-18
Update details:

What's new in this version:

Added or Changed:
InnoDB: Performance was improved for the following operations:
- Dropping a large tablespace on a MySQL instance with a large buffer pool (>32GB).
- Dropping a tablespace with a significant number of pages referenced from the adaptive hash index.
- Truncating temporary tablespaces.
- The pages of dropped or truncated tablespaces and associated AHI entries are now removed from the buffer pool passively as pages are encountered during normal operations. Previously, dropping or truncating tablespaces initiated a full list scan to remove pages from the buffer pool immediately, which negatively impacted performance (Bug #98869)
- InnoDB: The new AUTOEXTEND_SIZE option defines the amount by which InnoDB extends the size of a tablespace when it becomes full, making it possible to extend tablespace size in larger increments. Allocating space in larger increments helps to avoid fragmentation and facilitates ingestion of large amounts of data. The AUTOEXTEND_SIZE option is supported with the CREATE TABLE, ALTER TABLE, CREATE TABLESPACE, and ALTER TABLESPACE statements. For more information, see Tablespace AUTOEXTEND_SIZE Configuration. A usage sketch appears after this list.
- An AUTOEXTEND_SIZE column was added to the INFORMATION_SCHEMA.INNODB_TABLESPACES table.
- InnoDB: InnoDB now supports encryption of doublewrite file pages belonging to encrypted tablespaces. The pages are encrypted using the encryption key of the associated tablespace. For more information, see InnoDB Data-at-Rest Encryption.
- InnoDB: InnoDB atomics code was revised to use C++ std::atomic.
- When invoked with the --all-databases option, mysqldump now dumps the mysql database first, so that when the dump file is reloaded, any accounts named in the DEFINER clause of other objects will already have been created
- Some overhead for disabled Performance Schema and LOCK_ORDER tool instrumentation was identified and eliminated
- For BLOB and TEXT columns that have a default value expression, the INFORMATION_SCHEMA.COLUMNS table and SHOW COLUMNS statement now display the expression
- CRC calculations for binlog checksums are faster on ARM platforms. Thanks to Krunal Bauskar for the contribution
- MySQL Server’s asynchronous connection failover mechanism now supports Group Replication topologies, by automatically monitoring changes to group membership and distinguishing between primary and secondary servers. When you add a group member to the source list and define it as part of a managed group, the asynchronous connection failover mechanism updates the source list to keep it in line with membership changes, adding and removing group members automatically as they join or leave. The new asynchronous_connection_failover_add_managed() and asynchronous_connection_failover_delete_managed() UDFs are used to add and remove managed sources.
- The connection is failed over to another group member if the currently connected source goes offline, leaves the group, or is no longer in the majority, and also if the currently connected source does not have the highest weighted priority in the group. For a managed group, a source's weight is assigned depending on whether it is a primary or a secondary server. So assuming that you set up the managed group to give a higher weight to a primary and a lower weight to a secondary, when the primary changes, the higher weight is assigned to the new primary, so the replica changes over the connection to it. This behavior also applies to single (non-managed) servers, so the connection is now failed over if another source server is available that has a higher weighted priority.
- Replication channels can now be set to assign a GTID to replicated transactions that do not already have one, using the ASSIGN_GTIDS_TO_ANONYMOUS_TRANSACTIONS option of the CHANGE REPLICATION SOURCE TO statement. This feature enables replication from a source that does not use GTID-based replication, to a replica that does. For a multi-source replica, you can have a mix of channels that use ASSIGN_GTIDS_TO_ANONYMOUS_TRANSACTIONS, and channels that do not. The GTID can include the replica's own server UUID or a server UUID that you assign to identify transactions from different sources. A usage sketch appears after this list.
- Note that a replica set up with ASSIGN_GTIDS_TO_ANONYMOUS_TRANSACTIONS on any channel cannot be promoted to replace the replication source server in the event that a failover is required, and a backup taken from the replica cannot be used to restore the replication source server. The same restriction applies to replacing or restoring other replicas that use ASSIGN_GTIDS_TO_ANONYMOUS_TRANSACTIONS on any channel. The GTID set (gtid_executed) from a replica set up with ASSIGN_GTIDS_TO_ANONYMOUS_TRANSACTIONS is nonstandard and should not be transferred to another server, or compared with another server's gtid_executed set.
- For a multithreaded replica (where slave_parallel_workers is greater than 0), setting slave_preserve_commit_order=1 ensures that transactions are executed and committed on the replica in the same order as they appear in the replica's relay log. Each executing worker thread waits until all previous transactions are committed before committing. If a worker thread fails to execute a transaction because a possible deadlock was detected, or because the transaction's execution time exceeded a relevant wait timeout, it automatically retries the number of times specified by slave_transaction_retries before stopping with an error. Transactions with a non-temporary error are not retried. A configuration sketch appears after this list.
- The replication applier on a multithreaded replica has always handled data access deadlocks that were identified by the storage engines involved. However, some other types of lock were not detected by the replication applier, such as locks involving access control lists (ACLs) or metadata locking (for example, FLUSH TABLES WITH READ LOCK statements). This could lead to three-actor deadlocks with the commit order locking, which could not be resolved by the replication applier, and caused replication to hang indefinitely. From MySQL 8.0.23, deadlock handling on multithreaded replicas that preserve the commit order has been enhanced to mitigate these types of deadlocks. The deadlocks are not specifically resolved by the replication applier, but the applier is aware of them and initiates automatic retries for the transaction, rather than hanging. If the retries are exhausted, replication stops in a controlled manner so that the deadlock can be resolved manually.
- The new temptable_max_mmap variable defines the maximum amount of memory the TempTable storage engine is permitted to allocate from memory-mapped temporary files before it starts storing data to InnoDB internal temporary tables on disk. A setting of 0 disables allocation of memory from memory-mapped temporary files. For more information, see Internal Temporary Table Use in MySQL. A usage sketch appears after this list.
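
To illustrate the AUTOEXTEND_SIZE option described in this list, here is a minimal SQL sketch; the names ts1, ts1.ibd, and t1 are hypothetical:

-- Create a general tablespace that grows in 4MB increments when it becomes full
CREATE TABLESPACE ts1 ADD DATAFILE 'ts1.ibd' AUTOEXTEND_SIZE = 4M;

-- The increment can be changed later
ALTER TABLESPACE ts1 AUTOEXTEND_SIZE = 64M;

-- The option is also accepted for file-per-table tablespaces via CREATE TABLE
CREATE TABLE t1 (c1 INT PRIMARY KEY) AUTOEXTEND_SIZE = 4M;

-- Inspect the setting through the new INFORMATION_SCHEMA column
SELECT NAME, AUTOEXTEND_SIZE
FROM INFORMATION_SCHEMA.INNODB_TABLESPACES
WHERE NAME = 'ts1';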
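
A minimal sketch of ASSIGN_GTIDS_TO_ANONYMOUS_TRANSACTIONS; the channel name 'ch1' is hypothetical:

-- Tag anonymous (non-GTID) transactions from this source with the replica's own server UUID
-- (requires gtid_mode=ON on the replica)
STOP REPLICA FOR CHANNEL 'ch1';
CHANGE REPLICATION SOURCE TO
    ASSIGN_GTIDS_TO_ANONYMOUS_TRANSACTIONS = LOCAL
    FOR CHANNEL 'ch1';
START REPLICA FOR CHANNEL 'ch1';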
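
A configuration sketch for commit-order preservation on a multithreaded replica; the worker count is illustrative, and the variables are changed while the replica is stopped:

STOP REPLICA;
-- LOGICAL_CLOCK parallelism is required for slave_preserve_commit_order
SET GLOBAL slave_parallel_type = 'LOGICAL_CLOCK';
SET GLOBAL slave_parallel_workers = 4;
SET GLOBAL slave_preserve_commit_order = ON;
START REPLICA;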
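
And a sketch of the new temptable_max_mmap variable; the 2GB value is illustrative:

-- Cap TempTable's memory-mapped temporary-file allocation at 2GB
SET GLOBAL temptable_max_mmap = 2147483648;
-- Or disable it entirely, so overflow goes to InnoDB on-disk internal temporary tables
SET GLOBAL temptable_max_mmap = 0;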

Fixed:
- InnoDB: A CREATE TABLE operation that specified the COMPRESSION option was permitted with a warning on a system that does not support hole punching. The operation now fails with an error instead
- InnoDB: A MySQL DB system restart following an upgrade that was initiated while a data load operation was in progress raised an assertion failure
- InnoDB: An error message regarding the number of truncate operations on the same undo tablespace between checkpoints incorrectly indicated a limit of 64. The limit was raised from 64 to 50,000 in MySQL 8.0.22
- InnoDB: rw_lock_t and buf_block_t source code structures were reduced in size
- InnoDB: An InnoDB transaction became inconsistent after creating a table using a storage engine other than InnoDB from a query expression that operated on InnoDB tables
- InnoDB: In some circumstances, such as when an existing gap lock inherits a lock from a deleted record, the number of locks that appear in the INFORMATION_SCHEMA.INNODB_TRX table could diverge from the actual number of record locks. Thanks to Fungo Wang from Alibaba for the patch
- InnoDB: An off-by-one error in Fil_system sharding code was corrected, and the maximum number of shards (MAX_SHARDS) was changed to 69
- InnoDB: The TempTable storage engine memory allocator allocated extra blocks of memory unnecessarily
- InnoDB: A SELECT COUNT(*) operation on a table containing uncommitted data performed poorly due to unnecessary I/O. Thanks to Brian Yue for the contribution
- InnoDB: A race condition when shutting down the log writer raised an assertion failure
- InnoDB: Page cleaner threads were not utilized optimally in sync-flush mode, which could cause page flush operations to slow down or stall in some cases. Sync-flush mode occurs when InnoDB is close to running out of free space in the redo log, causing the page cleaner coordinator to initiate aggressive page flushing
- InnoDB: A high frequency of updates while undo log truncation was enabled caused purge to lag. The lag was due to the innodb_purge_rseg_truncate_frequency setting being changed temporarily from 128 to 1 when an undo tablespace was selected for truncation. The code that modified the setting has been removed
- InnoDB: Automated truncation of undo tablespaces caused a performance regression. To address this issue, undo tablespace files are now initialized at 16MB and extended by a minimum of 16MB. To handle aggressive growth, the file extension size is doubled if the previous file extension happened less than 0.1 seconds earlier. Doubling of the extension size can occur multiple times to a maximum of 256MB. If the previous file extension occurred more than 0.1 seconds earlier, the extension size is reduced by half, which can also occur multiple times, to a minimum of 16MB. Previously, the initial size of an undo tablespace depended on the InnoDB page size, and undo tablespaces were extended four extents at a time.
- If the AUTOEXTEND_SIZE option is defined for an undo tablespace, the undo tablespace is extended by the greater of the AUTOEXTEND_SIZE setting and the extension size determined by the logic described above.
- When an undo tablespace is truncated, it is normally recreated at 16MB in size, but if the current file extension size is larger than 16MB, and the previous file extension happened within the last second, the new undo tablespace is created at a quarter of the size defined by the innodb_max_undo_log_size variable.
- Stale undo tablespace pages are no longer removed at the next checkpoint. Instead, the pages are removed in the background by the InnoDB master thread (Bug #32020900, Bug #101194)
- InnoDB: A posix_fallocate() failure while preallocating space for a temporary tablespace raised an error and caused an initialization failure. A warning is now issued instead, and InnoDB falls back to the non-posix_fallocate() method for preallocating space
- InnoDB: An invalid pointer caused a shutdown failure on a MySQL Server compiled with the DISABLE_PSI_MEMORY source configuration option enabled
- InnoDB: A long SX lock held by an internal function that calculates new statistics for a given index caused a failure
- InnoDB: The INFORMATION_SCHEMA.INNODB_TABLESPACES table reported a FILE_SIZE of 0 for some tables and schemas. When the associated tablespace was not in the memory cache, the tablespace name was used to determine the tablespace file name, which was not always a reliable method. The tablespace ID is now used instead. Using the tablespace name remains as a fallback method
- InnoDB: After dropping a FULLTEXT index and renaming the table to move it to a new schema, the FULLTEXT auxiliary tables were not renamed accordingly and remained in the old schema directory
- InnoDB: After upgrading to MySQL 8.0, a failure occurred when attempting to perform a DML operation on a table that was previously defined with a full-text search index
- InnoDB: Importing a tablespace with a page-compressed table did not report a schema mismatch error for source and destination tables defined with a different COMPRESSION setting. The COMPRESSION setting of the exported table is now saved to the .cfg metadata file during the FLUSH TABLES ... FOR EXPORT operation, and that information is checked on import to ensure that both tables are defined with the same COMPRESSION setting
- InnoDB: Dummy keys used to check if the MySQL Keyring plugin is functioning were left behind in an inactive state, and the number of inactive dummy keys increased over time. The actual master key is now used instead, if present. If no master key is available, a dummy master key is generated
- InnoDB: Querying the INFORMATION_SCHEMA.FILES table after moving the InnoDB system tablespace outside of the data directory raised a warning indicating that the innodb_system filename is unknown
- InnoDB: In a replication scenario involving a replica with binary logging or log_slave_updates disabled, the server failed to start due to an excessive number of gaps in the mysql.gtid_executed table. The gaps occurred for workloads that included both InnoDB and non-InnoDB transactions. GTIDs for InnoDB transactions are flushed to the mysql.gtid_executed table by the GTID persister thread, which runs periodically, while GTIDs for non-InnoDB transactions are written to the mysql.gtid_executed table directly by replica server threads. The GTID persister thread fell behind as it cycled through merging entries and compressing the mysql.gtid_executed table. As a result, the size of the GTID flush list for InnoDB transactions grew over time along with the number of gaps in the mysql.gtid_executed table, eventually causing a server failure and subsequent startup failures. To address this issue, the GTID persister thread now writes GTIDs for both InnoDB and non-InnoDB transactions, and foreground commits are forced to wait if the GTID persister thread falls behind. Also, the gtid_executed_compression_period default setting was changed from 1000 to 0 to disable explicit compression of the mysql.gtid_executed table by default. Thanks to Venkatesh Prasad for the contribution
- InnoDB: Persisting GTID values for XA transactions affected XA transaction performance. Two GTID values are generated for XA transactions, one for the prepare stage and another for the commit stage. The first GTID value is written to the undo log and later overwritten by the second GTID value. Writing of the second GTID value could only occur after flushing the first GTID value to the gtid_executed table. Space is now reserved in the undo log for both XA transaction GTID values
- InnoDB: InnoDB source files were updated to address warnings produced when building Doxygen source code documentation
- InnoDB: The full-text search synchronization thread attempted to read a previously-freed word from the index cache
- InnoDB: A 20µs sleep in the buf_wait_for_read() function introduced with parallel read functionality in MySQL 8.0.17 took 1ms on Windows, causing an unexpected timeout when running certain tests. Also, AIO threads were found to have uneven amounts of waiting operating system IO requests
- InnoDB: Cleanup in certain replicated XA transactions failed to reattach transaction object (trx_t), which raised an assertion failure
- InnoDB: The tablespace encryption type setting was not properly updated due to a failure during the resumption of an ALTER TABLESPACE ENCRYPTION operation following a server failure
- InnoDB: An interrupted tablespace encryption operation did not update the encrypt_type table option information in the data dictionary when the operation resumed processing after the server was restarted
- InnoDB: Internal counter variables associated with thread sleep delay and threads entering and leaving InnoDB were revised to use C++ std::atomic. Built-in atomic operations were removed. Thanks to Yibo Cai from ARM for the contribution
- InnoDB: A relaxed memory order was implemented for dictionary memory variable fetch-add (dict_temp_file_num.fetch_add) and store (dict_temp_file_num.store) operations.
- InnoDB: A background thread that resumed a tablespace encryption operation after the server started failed to take a metadata lock on the tablespace, which permitted concurrent DDL operations and led to a race condition with the startup thread. The startup thread now waits until the tablespace metadata lock is taken
- InnoDB: Calls to numa_all_nodes_ptr were replaced by the numa_get_mems_allowed() function. Thanks to Daniel Black for the contribution
- Partitioning: ALTER TABLE t1 EXCHANGE PARTITION ... WITH TABLE t2 led to an assert when t1 was not a partitioned table
- Replication: The network_namespace parameter for the asynchronous_connection_failover_add_source() and asynchronous_connection_failover_delete_source() UDFs is no longer used from MySQL 8.0.23. These UDFs add and remove replication source servers from the source list for a replication channel for the asynchronous connection failover mechanism. The network namespace for a replication channel is managed using the CHANGE REPLICATION SOURCE statement, and has special requirements for Group Replication source servers, so it should no longer be specified in the UDFs
- Replication: When the system variable transaction_write_set_extraction=XXHASH64 is set, which is the default in MySQL 8.0 and a requirement for Group Replication, the collection of writes for a transaction previously had no upper size limit. Now, for standard source to replica replication, the numeric limit on write sets specified by binlog_transaction_dependency_history_size is applied, after which the write set information is discarded but the transaction continues to execute. Because the write set information is then unavailable for the dependency calculation, the transaction is marked as non-concurrent, and is processed sequentially on the replica. For Group Replication, the process of extracting the writes from a transaction is required for conflict detection and certification on all group members, so the write set information cannot be discarded if the transaction is to complete. The byte limit set by group_replication_transaction_size_limit is applied instead of the numeric limit, and if the limit is exceeded, the transaction fails to execute
- Replication: When mysqlbinlog’s --print-table-metadata option was used, mysqlbinlog used a different method for assessing numeric fields to the method used by the server when writing to the binary log, resulting in incorrect metadata output relating to these fields. mysqlbinlog now uses the same method as the server
- Replication: When using network namespaces in a replication channel and the initial connection from the replica to the master was interrupted, subsequent connection attempts failed to use the correct namespace information
- Replication: If the Group Replication applier channel (group_replication_applier) was holding a lock on a table, for example because of a backup in progress, and the member was expelled from the group and tried to rejoin automatically, the auto-rejoin attempt was unsuccessful and did not retry. Now, Group Replication checks during startup and rejoin attempts whether the group_replication_applier channel is already running. If that is the case at startup, an error message is returned. If that is the case during an auto-rejoin attempt, that attempt fails, but further attempts are made as specified by the group_replication_autorejoin_tries system variable
- Replication: If a group member was expelled and made an auto-rejoin attempt at a point when some tables on the instance were locked (for example while a backup was running), the attempt failed and no further attempts were made. This scenario is now handled correctly
- Replication: As the number of replicas replicating from a semisynchronous source server increased, locking contention could result in a performance degradation. The locking mechanisms used by the plugins have been changed to use shared locks where possible, avoid unnecessary lock acquisitions, and limit callbacks. The new behaviors can be implemented by enabling the following system variables:
- replication_sender_observe_commit_only=1 limits callbacks.
- replication_optimize_for_static_plugin_config=1 adds shared locks and avoids unnecessary lock acquisitions. This system variable must be disabled if you want to uninstall the plugin.
- Both system variables can be enabled before or after installing the semisynchronous replication plugin, and can be enabled while replication is running. Semisynchronous replication source servers can also get performance benefits from enabling these system variables, because they use the same locking mechanisms as the replicas. A usage sketch appears at the end of this list
- Replication: On a multi-threaded replica where the commit order is preserved, worker threads must wait for all transactions that occur earlier in the relay log to commit before committing their own transactions. If a deadlock occurs because a thread waiting to commit a transaction later in the commit order has locked rows needed by a transaction earlier in the commit order, a deadlock detection algorithm signals the waiting thread to roll back its transaction. Previously, if transaction retries were not available, the worker thread that rolled back its transaction would exit immediately without signalling other worker threads in the commit order, which could stall replication. A worker thread in this situation now waits for its turn to call the rollback function, which means it signals the other threads correctly (Bug #87796)
- Replication: GTIDs are only available on a server instance up to the number of non-negative values for a signed 64-bit integer (2 to the power of 63 minus 1). If you set the value of gtid_purged to a number that approaches this limit, subsequent commits can cause the server to run out of GTIDs and take the action specified by binlog_error_action. From MySQL 8.0.23, a warning message is issued when the server instance is approaching the limit
- Microsoft Windows: On Windows, running the MySQL server as a service caused shared-memory connections to fail
- JSON: JSON_ARRAYAGG() did not always perform proper error handling (Bug #32012559, Bug #32181438)
- JSON: When updating a JSON value using JSON_SET(), JSON_REPLACE(), or JSON_REMOVE(), the target column can sometimes be updated in-place. This happened only when the target table of the update operation was a base table, but when the target table was an updatable view, the update was always performed by writing the full JSON value.
- Now in such cases, an in-place update (that is, a partial update) is also performed when the target table is an updatable view. A sketch appears at the end of this list
- JSON: Work done in MySQL 8.0.22 to cause prepared statements to be prepared only once introduced a regression in the handling of dynamic parameters to JSON functions. All JSON arguments were classified as data type MYSQL_TYPE_JSON, which overlooked the fact that JSON functions take two kinds of JSON parameters—JSON values and JSON documents—and this distinction cannot be made with the data type only. For Bug #31667405, this problem was solved for comparison operators and the IN() operator by making it possible to tag a JSON argument as being a scalar value, while letting arguments to other JSON functions be treated as JSON documents.
- The present fix restores for a number of JSON functions their treatment of certain arguments as JSON values, as listed here:
- The first argument to MEMBER OF()
- The third, fifth, seventh, and subsequent odd-numbered arguments to the functions JSON_INSERT(), JSON_REPLACE(), JSON_SET(), JSON_ARRAY_APPEND(), and JSON_ARRAY_INSERT()
- JSON: When mysqld was run with --debug, attempting to execute a query that made use of a multi-valued index raised an error
- Use of the thread_pool plugin could result in Address Sanitizer warnings
- When a condition pushed down to a materialized derived table was only partially pushed down, and a query transformation had added new conditions to the WHERE condition, the optimizer could in some cases call the internal fix_fields() function for the condition that remained in the outer query block. A successful return from this function call was misinterpreted as an error, leading to the silent failure of the original statement
- Multiple calls to a stored procedure containing an ALTER TABLE statement that included an ORDER BY clause could cause a server exit
- Prepared statements involving stored programs could cause heap-use-after-free memory problems
- Queries on INFORMATION_SCHEMA tables that involved materialized derived tables could fail
- A potential buffer overflow was fixed. Thanks to Sifang Zhao for pointing out the issue, and for suggesting a fix (although it was not used)
- Conversion of FLOAT values to values of type INT could generate Undefined Behavior Sanitizer warnings
- In multiple-row queries, the LOAD_FILE() function evaluated to the same value for every row
- Generic Linux tar file distributions had too-restrictive file permissions after unpacking, requiring a manual chmod to correct
- For debug builds, prepared SET statements containing subqueries in stored procedures could raise an assertion
- For prepared statements, illegal mix of collations errors could occur for legal collation mixes
- The functions REGEXP_LIKE(), REGEXP_INSTR(), and REGEXP_REPLACE() raise errors for malformed regular expression patterns, but could also return NULL for such cases, causing subsequent debug asserts. Now we ensure that these functions do not return NULL except in certain specified cases.
- The function REGEXP_SUBSTR() can always return NULL, so no such check is needed, and for this function we make sure that one is not performed
- Testing an aggregate function for IS NULL or IS NOT NULL in a HAVING condition using WITH ROLLUP led to wrong results
- When a new aggregate function was added to the current query block because an inner query block had an aggregate function requiring evaluation in the current one, the server did not add rollup wrappers to it as needed
- For debug builds, certain CREATE TABLE statements with CHECK constraints could raise an assertion
- Incorrect BLOB field values were passed from InnoDB during a secondary engine load operation
- The LOCK_ORDER tool did not correctly represent InnoDB share exclusive locks
- The server did not properly handle an error raised when trying to use an aggregation function with an invalid column type as part of a hash join
- The length of the WORD column of the INFORMATION_SCHEMA.KEYWORDS table could change depending on table contents
- The Performance Schema host_cache table was empty and did not expose the contents of the host cache if the Performance Schema was disabled. The table now shows cache contents regardless of whether the Performance Schema is enabled
- A HANDLER READ statement sometimes hit an assert when a previous statement did not restore the original value of THD::mark_used_columns after use
- Importing a compressed table could cause an unexpected server exit if the table contained values that were very large when uncompressed
- Removed a memory leak that could occur when a subquery using a hash join and LIMIT was executed repeatedly
- A compilation failure on Ubuntu was corrected
- Memory used for storing partial-revokes information could grow excessively for sessions that executed a large number of statements
- The server did not handle all cases of the WHERE_CONDITION optimization correctly
- FLUSH TABLES WITH READ LOCK could block other sessions from executing SHOW TABLE STATUS
- In some cases, MIN() and MAX() incorrectly returned NULL when used as window functions with temporal or JSON values as arguments
- GRANT ... GRANT OPTION ... TO and GRANT ... TO ... WITH GRANT OPTION sometimes were not correctly written to the server logs
- For debug builds, CREATE TABLE using a partition list of more than 256 entries raised an assertion
- It was possible for queries in the file named by the init_file system variable to cause server startup failure
- When performing a hash join, the optimizer could register a false match between a negative integer value and a very large unsigned integer value
- SHOW VARIABLES could report an incorrect value for the partial_revokes system variable
- In the Performance Schema user_defined_functions table, the value of the UDF_LIBRARY column is supposed to be NULL for UDFs registered via the service API. The value was incorrectly set to the empty string
- The server automatic upgrade procedure failed to upgrade older help tables that used the latin1 character set
- Duplicate warnings could occur when executing an SQL statement that read the grant tables in serializable or repeatable-read transaction isolation level
- In certain queries with DISTINCT aggregates (which in general are solved by sorting before aggregation), the server used a temporary table instead of streaming due to the mistaken assumption that the logic for handling the temporary table performed deduplication. Now the server checks for the implied unique index instead, which is more robust and allows for the removal of unnecessary logic
- Certain combinations of lower_case_table_names values and schema names in Event Scheduler event definitions could cause the server to stall
- Calling one stored function from within another could produce a conflict in field resolution, resulting in a server exit
- User-defined functions defined without a udf_init() method could cause an unexpected server exit
- Setting the secure_file_priv system variable to NULL should disable its action, but instead caused the server to create a directory named NULL
- mysqlpump could exit unexpectedly due to improper simultaneous accesses to shared structures
- Uninstalling a component and deregistering user-defined functions (UDFs) installed by the component was not properly synchronized with whether the UDFs were currently in use
- Cleanup following execution of a prepared statement that performed a multi-table UPDATE or DELETE was not always done correctly, which meant that, following the first execution of such a prepared statement, the server reported a nonzero number of rows updated, even though no rows were actually changed
- For engines that support primary key extension, when the total key length exceeded MAX_KEY_LENGTH or the number of key parts exceeded MAX_REF_PARTS, key parts of the primary key that did not fit within these limits were not added to the secondary key, but all key parts of the primary key were unconditionally marked as part of the secondary key.
- This led to a situation in which the secondary key was treated as a covering index, which meant that sometimes the wrong access method was chosen.
- This is fixed by modifying the way in which key parts of primary keys are added to secondary keys, so that those which do not fit within the limits mentioned previously are cleared
- When MySQL is configured with -DWITH_ICU=system, CMake now checks that the ICU library version is sufficiently recent
- When invoked with the --binary-as-hex option, mysql displayed NULL values as empty binary strings (0x).
- Selecting an undefined variable returned the empty binary string (0x) rather than NULL
- Enabling DISABLE_PSI_xxx Performance Schema-related CMake options caused build failures
- Some queries returned different results depending on the value of internal_tmp_mem_storage_engine.
- The root cause of this issue related to the fact that, when buffering rows for window functions, if the size of the in-memory temporary table holding these buffered rows exceeds the limit specified, a new temporary table is created on disk; the frame buffer partition offset is set at the beginning of a new partition to the total number of rows that have been read so far, and is updated specifically for use when the temporary table is moved to disk (this being used to calculate the hints required to process window functions). The problem arose because the frame buffer partition offset was not updated for the specific case when a new partition started while creating the temporary table on disk, which caused the wrong rows to be read.
- This issue is fixed by making sure to update the frame buffer partition offset correctly whenever a new partition starts while a temporary table is moved to disk
- While buffering rows for window functions, if the size of the in-memory temporary table holding these buffered rows exceeds the limit specified by temptable_max_ram, a new temporary table is created on disk. After the creation of the temporary table, hints used to process window functions need to be reset, since the temporary table is now moved to disk, making the existing hints unusable. When the creation of the temporary table on disk occurred when the first row in the frame buffer was being processed, the hints had not been initialized and trying to reset these uninitialized hints resulted in an unplanned server exit.
- This issue is fixed by adding a check to verify whether frame buffer hints have been initialized, prior to resetting them
- The Performance Schema could produce incorrect results for joins on a CHANNEL_NAME column when the index for CHANNEL_NAME was disabled with USE INDEX ()
- When removing unused window definitions, a subquery that was part of an ORDER BY was not removed
- In certain cases, the server did not handle multiply-nested subqueries correctly
- The recognized syntax for a VALUES statement includes an ORDER BY clause, but this clause was not resolved, so the execution engine could encounter invalid data. A small example appears at the end of this list
- The server attempted to access a non-existent temporary directory at startup, causing a failure. Checks were added to ensure that temporary directories exist, and that files are successfully created in the tmpdir directory
- While removing redundant sorting, a window's ordering was removed due to the fact that rows were expected to come in order because of the ordering of another window. When the other window was subsequently removed because it was unused, this resulted in unordered rows, which was not expected during evaluation.
- Now in such cases, removal of redundant sorts is not performed until after any unused windows have been removed. In addition, resolution of any rollups has been moved to the preparation phase
- Semisynchronous replication errors were incorrectly written to the error log with a subsystem tag of Server. They are now written with a tag of Repl, the same as for other replication errors
- A user could grant itself as a role to itself
- The server did not always correctly handle cases in which multiple WHERE conditions, one of which was always FALSE, referred to the same subquery
- With a lower_case_table_names=2 setting, InnoDB background threads sometimes acquired table metadata locks using the wrong character case for the schema name part of a lock key, resulting in unprotected metadata and race conditions. The correct character case is now applied. Changes were also implemented to prevent metadata locks from being released before corresponding data dictionary objects, and to improve assertion code that checks lock protection when acquiring data dictionary objects
- If a CR_UNKNOWN_ERROR was to be sent to a client, an exception occurred
- Conversion of DOUBLE values to values of type BIT, ENUM, or SET could generate Undefined Behavior Sanitizer warnings
- Certain accounts could cause server startup failure if the skip_name_resolve system variable was enabled
- Client programs could unexpectedly exit if communication packets contained bad data
- A buffer overflow in the client library was fixed
- When creating a multi-valued or other functional index, a performance drop was seen when executing a query against the table on which the index was defined, even though the index itself was not actually used. This occurred because the hidden virtual column that backs such indexes was evaluated unnecessarily for each row in the query
- CMake checks for libcurl dependencies were improved
- mysql_config_editor incorrectly treated # in password values as a comment character
- In some cases, the optimizer attempted to compute the hash value for an empty string. Now a fixed value is always used instead
- The INSERT() and RPAD() functions did not correctly set the character set of the result
- Some corner cases for val1 BETWEEN val2 AND val3 were fixed, such as that -1 BETWEEN 9223372036854775808 AND 1 returned true
- For the Performance Schema memory_summary_global_by_event_name table, the low watermark columns could have negative values, and the high watermark columns had ever-increasing values even when the server memory usage did not increase
- Several issues converting strings to numbers were fixed
- Certain GROUP BY queries that performed correctly did not return the expected result when WITH ROLLUP was added. This was due to the fact that decimal information was not always correctly piped through rollup group items, causing functions returning decimal values, such as TRUNCATE(), to receive data of the wrong type
- When creating fields for materializing temporary tables (that is, when needing to sort a join), the optimizer checks whether the item needs to be copied or is only a constant. This was not done correctly in one specific case: when performing an outer join against a view or derived table containing a constant, the item was not properly materialized into the table, which could yield spurious occurrences of NULL in the result
- When REGEXP_REPLACE() was used in an SQL statement, the internal function Regexp_engine::Replace() did not reset the error code value after handling a record, which could affect processing of the next record and lead to issues. Our thanks to Hope Lee for the contribution
- For a query having the following form, the column list sometimes assumed an inconsistent state after temporary tables were created, causing out-of-bounds indexing later:
SELECT * FROM (
SELECT PI()
FROM t1 AS table1, t1 AS table2
ORDER BY PI(), table1.a
) AS d1;

- When aggregating data that was already sorted (known as performing streaming aggregation, due to no temporary tables being used), it was not possible to determine when a group ended until processing the first row in the next group, by which time the group expressions to be output were often already overwritten.
- This is fixed by replacing the complex logic previously used with the much simpler method of saving a representative row for the group when encountering it the first time, so that its columns can easily be retrieved for the output row when needed
- Subqueries making use of fulltext matching might not perform properly when subquery_to_derived was enabled, and could lead to an assert in debug builds
- When an ALTER TABLE ... CONVERT TO CHARACTER SET statement is executed, the character set of every CHAR, VARCHAR, and TEXT column in the table is updated to the new CHARACTER SET value. This change was also applied to the hidden CHAR column used by an ARRAY column for a multi-valued index; since the character set of the hidden column must be one of my_charset_utf8mb4_0900_bin or binary, this led to an assert in debug builds of the server.
- This issue is resolved by no longer setting the character set of the hidden column to that of the table when executing the ALTER TABLE statement referenced previously; this is similar to what is done for BLOB columns in similar circumstances
- In some cases, the server's internal string-conversion routines had problems handling floating-point values that used length specifiers and triggered use of scientific notation
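
For the semisynchronous replication locking changes noted earlier in this list, a minimal sketch of enabling the two new system variables; both can be set while replication is running:

SET GLOBAL replication_optimize_for_static_plugin_config = ON;  -- shared locks, fewer acquisitions
SET GLOBAL replication_sender_observe_commit_only = ON;         -- limit callbacks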
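
A minimal sketch of the JSON partial-update fix for updatable views noted earlier in this list; the table t, view v, and JSON path are hypothetical:

CREATE TABLE t (id INT PRIMARY KEY, doc JSON);
CREATE VIEW v AS SELECT id, doc FROM t;
-- This update through the view can now be applied in place (partial update)
-- rather than by rewriting the full JSON value
UPDATE v SET doc = JSON_SET(doc, '$.name', 'abc') WHERE id = 1;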
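
And a small example of the VALUES ... ORDER BY fix noted earlier in this list; the row values are illustrative:

-- The ORDER BY clause of a VALUES statement is now resolved correctly
VALUES ROW(3), ROW(1), ROW(2) ORDER BY column_0;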

MySQL Workbench 8.0.23

Updated: 2021-01-18
Update details:

TickTick 3.7.7.1 (64-bit)

Updated: 2021-01-13
Update details:

Kinza 6.7.2 (64-bit)

Updated: 2021-01-09
Update details:

What's new in this version:

- Support for minor Chromium upgrade (87.0.4280.88 → 87.0.4280.141)

rekordbox 6.4.2

Updated: 2020-12-22
Update details:

What's new in this version:

Improved:
- New keyboard shortcuts have now been added for LIGHTING mode
- Notes have now been added for fixture information in the search results in the FIXTURE LIBRARY screen for LIGHTING mode
- The file type of TIDAL tracks will now appear as TIDAL(FLAC) or TIDAL(AAC)

Fixed:
- Occasionally tracks were deleted incorrectly when attempting to delete a track in a playlist in Export mode
- At times the library conversion stopped midway
- Potential crash after canceling library conversion
- Potential security issue
- Improved stability and fixes for other minor issues

ProPresenter 7.4.0

Updated: 2020-12-17
Update details:

What's new in this version:

New:
- Adds support for more native languages than ever before including: English, Spanish, Portuguese, French, German, Korean, Russian, & Norwegian
- Adds new logic that allows background videos to no longer activate when triggered a second time
- Adds the ability to set line fill to current line width or max line width
- Adds multiple additions to linked objects including: Current & Next Slide Group Name, Current & Next Playlist Item, Number of Remaining Slides, & so much more!
- Adds in-app licensing option for "active" or "inactive" allowing users to use a computer registered as "inactive" for editing with Bibles

Fixed:
- an issue where clearing to logo does not transition as expected
- an issue with licenses not appearing for the RV60 Bible translation
- an issue with audio inputs that operate at a sample rate other than 48k
- a bug that causes the stage layout selection to be lost from a stage action on application restart
- a bug that causes the clear action to only cut instead of using the selected transition
- an issue with slide builds only using the cut transition instead of the specified transition
- a bug that prevents triggering with number entry if the keyboard focus is in the playlist view
- a bug that causes memory usage to grow while playing back a video
- a bug that causes the media export progress bar to not accurately show progress
- an issue that causes memory usage to grow if a PowerPoint file is in a smart playlist
- a bug where a look change causes the slide background color to inadvertently show
- an issue that causes MIDI actions to not import on macOS when created on Windows
- a bug where media effects are lost on playlist import
- an issue with saving the playlist data if there are multiple group folders
- a bug that causes the incorrect arrangement to be selected after importing a playlist
- a crash that could occur when importing PowerPoint files
- an issue where stage actions do not always trigger on the first click
- an issue where a chord chart might not update on the first click
- an issue with theme importing not copying media to the assets folder
- an issue with theme export not including attached media
- an issue with SDI and NDI outputs sending audio on startup when the option is not selected
- an issue with the next slide not always appearing on the stage screen
- a bug that prevents pasting text from Microsoft Word when theme fonts are selected
- a bug with the looks window not properly aligning the checkboxes

TickTick 3.7.6.2 (64-bit)

Updated: 2020-12-16
Update details:

Qt 6.0

Updated: 2020-12-09
Update details:

What's new in this version:

- When developing Qt 6, we had an in-depth look at some of Qt's most central parts to identify how we could improve them. We discovered a couple of core focus areas that we invested considerable time in improving. Those areas include:
C++17:
- With Qt 6 we now require a C++17 compatible compiler, enabling the use of more modern C++ language constructs when developing Qt and also allowing for integration points on the API side.
Core libraries and APIs:
- Much work has gone into Qt Core, as it is the module that implements the most central parts of Qt. We've gone through many areas there and made improvements. To name some of the most central ones:
- The new property and binding system: This system now brings the concept of bindings that made QML such a huge success in Qt 5 available from C++.
- Strings and Unicode: With Qt 5, we started aligning Qt fully with Unicode and completed a lot of the work, but a few items remained that we have now cleaned up for Qt 6. More details will come in a separate blog post later on.
- QList was a class that was often criticized in Qt 5, as it heap-allocated objects stored in it that were larger than a pointer, putting pressure on the heap allocator. In Qt 6, we changed this and unified QList and QVector into one class. See our blog post about QList in Qt 6 for details.
- QMetaType and QVariant are fundamental to how Qt's meta-object system works. Signals and slots would not be possible without QMetaType, and QVariant is required for dynamic invocations. Those two classes got an almost complete rewrite with Qt 6, and you can read about the details here.
- Other parts of Qt that are not related to graphics have also seen large changes. For example, Qt Concurrent has undergone an almost complete rewrite and now makes development of multi-threaded applications more effortless than ever. Qt Network has seen lots of clean-up and improvements.

New graphics architecture:
- The graphics architecture of Qt 5 was very much dependent on OpenGL as the underlying 3D graphics API. While this was the right approach in 2012 when we created Qt 5, the market around us has changed significantly over the last couple of years with the introduction of Metal and Vulkan. We now have a large set of different graphics APIs that are commonly being used on different platforms. For Qt as a cross-platform framework, this, of course, meant that we had to adjust to this and ensure our users can run Qt on all of them with maximum performance.
- So while Qt 5 relied on OpenGL for hardware-accelerated graphics, the picture completely changes with Qt 6. All of the graphics in Qt Quick are now built on top of a new abstraction layer for 3D graphics called RHI (Rendering Hardware Interface). RHI makes it possible for Qt to use the native 3D graphics API of the underlying OS/platform. So Qt Quick will now use Direct3D on Windows and Metal on macOS by default. For details, have a look at the blog post series about the RHI.
- The OpenGL specific classes in Qt still exist, but are now moved out of QtGui in the QtOpenGL module. We also added a new module called QtShaderTools to deal with the different shading languages of those APIs in a cross-platform way.

Qt Quick 3D and Qt 3D:
- Qt Quick 3D is a relatively new module. It seamlessly extends Qt Quick with 3D capabilities. With Qt Quick 3D, our focus was to create an API that is as easy to use as the existing parts of Qt Quick (for 2D user interfaces) while providing full support for creating complex 3D scenes. The main goal behind this effort has been to enable seamless integration between 2D and 3D content.
- This module has seen significant improvements with Qt 6 that we wouldn’t have been able to do in the Qt 5 series. Most importantly it is now always using the RHI abstraction layer to make optimal use of the underlying graphics API and Hardware. Additionally, it now features a much deeper and more performant integration between 2D and 3D content, allowing you to place 2D items into a 3D scene. It also has vastly improved support for glTF2 and physics-based rendering, making it trivial to import assets created in other design tools. There are many other major improvements in the module, a more in-depth description can be found in a separate blog post.
- Qt 3D is now also based on the RHI abstraction layer and has seen some performance improvements and cleanups. You can find more details in two blog posts by our partner KDAB (here and here).

Desktop styling for Qt Quick:
- When we created the set of controls for Qt Quick, our focus was to make them lightweight and performant. For that reason, they did not support desktop styling in Qt 5. However, in Qt 6, we found a way to make them look & feel native on desktop operating systems. With 6.0, Qt Quick now supports native styling on both macOS and Windows. See this blog post for details. Native look & feel for Android and Linux already existed with the Material and Fusion styles in Qt 5. We are improving those for future Qt releases and are also planning to implement a native style for iOS.
Interfacing with platform-specific functionality:
- Even with Qt offering most functionality required to develop your application platform-independently, there is sometimes a need to interface with platform-specific functionality. In Qt 5, we provided a set of add-on modules (QtX11Extras, QtWinExtras, QtMacExtras) to help with this purpose. But this full separation from the rest of Qt has led to a couple of architectural issues, inconsistencies and code duplication within Qt. In Qt 6, we made an effort to clean this up and fold the functionality offered by those add-on modules into platform specific APIs offered directly in Qt. This will make interfacing with OS/platform-specific APIs much easier in Qt 6. Have a look here for more details.

Build system and Packaging:
- We also made some considerable changes in how we build and distribute Qt. Worth mentioning is that Qt 6 itself is now built using CMake. This has also led to significant improvements for all our users that use CMake to build their projects. We will continue to support qmake for the lifetime of Qt 6, so there is no need to make any changes to your build system if you're using it, but we recommend using CMake for all new projects.
- Qt 6 also comes with a much smaller default package, and many of the add-ons are now distributed as separate packages through a package manager. This gives us more flexibility in adapting release schedules of add-ons to market requirements, allowing, for example, more frequent feature releases than the core Qt packages, or making add-ons available for multiple Qt versions at the same time. In addition, we can use the package manager as a delivery channel for 3rd party content. And finally, it gives our users more flexibility, as they can choose to download only what they really need.
- Currently, we are using the existing Qt installer as the backend for the package manager, but are investigating alternatives for future releases. See the blog post here for more details.

Compatibility:
- When making changes for Qt 6, we’ve tried to adjust our APIs to what we believe is required for the future while at the same time trying to break as little as possible for our existing users. While your code will need some adjustments to make the best possible use of Qt 6, we have tried to make porting to the new version as easy as possible
- One of the first things we did was to clean up our codebase. During the lifetime of Qt 5, we deprecated quite a few APIs and even entire modules. We removed those to get to a leaner Qt for the future, allowing us to leave behind some things that no longer make sense today
- However, we have taken care to mark as many of those APIs as possible as deprecated in Qt 5.15. Enabling deprecation warnings there and cleaning those up will bring you a long way towards making your codebase compatible with Qt 6
- Some of the most heavily used APIs that were removed in Qt 6 have been moved into a Qt5CoreCompat module. It contains a couple of widely used classes that have been removed from Qt 6, such as QRegExp, QTextCodec, the old SAX parser for XML, and a few other items. This module is intended as a porting aid and will not receive bug fixes, apart from regressions against Qt 5 and security-related problems. We recommend that you use it for porting but then incrementally remove your dependencies on it
- If you want to start porting to Qt 6, we have a much more detailed porting guide in our documentation

Supported platforms:
- Qt has always been a cross-platform framework, and that will continue in Qt 6. Qt 6.0 supports:
- Windows 10
- macOS 10.14 and newer
- Linux (Ubuntu 20.04, CentOS 8.1, OpenSuSE 15.1)
- iOS 13 or newer
- Android (API level 23 or newer)
- On the embedded side, we support a wide range of embedded devices running Linux. Qt 6 does not yet support any of the embedded real-time operating systems supported in Qt 5. Both QNX and INTEGRITY have recently added support for C++17, and we plan to add support for them by the time we release Qt 6.2.

Outlook:
- Qt 6.0 does not yet support many of the add-on modules that can be found in Qt 5.15. This was intentionally decided to free up time to ensure that we could complete all the changes we needed to make for the Qt framework's essential modules
- We are now working on bringing most of those add-ons over to Qt 6. We have already done a lot of work, and many add-ons already compile against Qt 6, but they are not yet officially released, as some cleanup work and refactoring remains to be done. We plan to have most of the important add-ons ported by the time we release Qt 6.2. Get a full overview of our add-on support in Qt 6.0 and beyond in the following blog post
- Apart from porting the missing add-ons over to Qt 6, a lot of our Qt 6 related work in the coming months will focus on the stability of the new releases and on taking the new property system into more widespread use within Qt itself
- We have adjusted our release timelines for Qt 6.1 and 6.2, and are planning to release Qt 6.1 already in April. After that, we plan to release our first long term supported version in the Qt 6 series, Qt 6.2 LTS, by the end of September
- And we are not yet done for this year: you can also expect a brand new version of Qt Creator and Qt Design Studio to be released before Christmas! Both will come with full support for Qt 6

Kinza 6.7.1 (64-bit)

Updated: 2020-12-03
Update details:

What's new in this version:

- Support for minor Chromium upgrade (87.0.4280.66 → 87.0.4280.88)

rekordbox 6.4.1

Updated: 2020-12-02
Update details:

What's new in this version:

Fixed:
- Freeze when launching rekordbox for the first time
- Potential freeze when importing tracks during track analysis