{"payload":{"feedbackUrl":"https://github.com/orgs/community/discussions/53140","repo":{"id":105944401,"defaultBranch":"master","name":"yugabyte-db","ownerLogin":"yugabyte","currentUserCanPush":false,"isFork":false,"isEmpty":false,"createdAt":"2017-10-05T21:56:00.000Z","ownerAvatar":"https://avatars.githubusercontent.com/u/17074854?v=4","public":true,"private":false,"isOrgOwned":true},"refInfo":{"name":"","listCacheKey":"v0:1726816332.0","currentOid":""},"activityList":{"items":[{"before":"872b59e5371790141729bce864f6cf9374bc77f9","after":"252717b7fe47c5de8b95356ca954fb65b94ae30c","ref":"refs/heads/master","pushedAt":"2024-09-20T08:50:41.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"yorq","name":"Yuriy Shchetinin","path":"/yorq","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/90446433?s=80&v=4"},"commit":{"message":"[PLAT-12263] G-Flag upgrade fails for tmp_dir if Rolling restart used\n\nSummary:\nDuring gflags upgrade we don't track whether node has updated gflags or not.\nSo if we change tmp dir through rolling upgrade, our code will still treat tmp dir unchanged,\ntil the end of upgrade. This leads to failing ysql check (as it uses tmp dir for universes with ysql auth)\n\nSo current diff adds universe field to upgrade context, to keep track of already applied changes.\n\nTest Plan:\nsbt test\n\n1) Create univrse with ysql auth enabled.\n2) Change tmp_dir through gflags rolling upgrade\n3) Check successful\n\nReviewers: vbansal, svarshney\n\nReviewed By: svarshney\n\nSubscribers: yugaware\n\nDifferential Revision: https://phorge.dev.yugabyte.com/D31851","shortMessageHtmlLink":"[PLAT-12263] G-Flag upgrade fails for tmp_dir if Rolling restart used"}},{"before":"294b7bbbfe65fe3e67e004cb6d6195c66855c0ae","after":"872b59e5371790141729bce864f6cf9374bc77f9","ref":"refs/heads/master","pushedAt":"2024-09-20T08:17:15.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"makalaaneesh","name":"Aneesh Makala","path":"/makalaaneesh","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/6829754?s=80&v=4"},"commit":{"message":"[#23542] YSQL: Add new YSQL function yb_servers_metrics() to fetch metrics such as cpu/memory usage from all nodes in cluster\n\nSummary:\nTo enable adaptive parallelism in voyager, https://docs.google.com/document/d/1beD7zNtpmfYflXV1hVJ9mq_uqyCTJ9Es4titPEksSNE/edit#heading=h.3c3bf00hwf, a YSQL function yb_servers_metrics() is added\nwhich will fetch certain metrics for all nodes in the cluster. This allows voyager to monitor the state of the cluster, and adapt the parallelism while importing data to target YB cluster. 
A YSQL API is needed in\norder to provide deployment-agnostic API (not having to fetch metrics for YBA/YBM/on-prem using different mechanisms).\n\nAdditionally, made a few changes to `MetricsSnapshotter`\n- Introduced a function for GetCpuUsageInInterval(int ms).\n- made the GetCpuUsage function static.\n- Introduced a `GetMemoryUsage` function to get memory usage (from proc/meminfo for linux and sysctl for macos)\n\nSample output:\n\n```\nyugabyte=# select uuid, jsonb_pretty(metrics), status, error from yb_servers_metrics();\n uuid | jsonb_pretty | status | error\n----------------------------------+-----------------------------------------------------+--------+-------\n bf98c74dd7044b34943c5bff7bd3d0d1 | { +| OK |\n | \"memory_free\": \"0\", +| |\n | \"memory_total\": \"17179869184\", +| |\n | \"cpu_usage_user\": \"0.135827\", +| |\n | \"cpu_usage_system\": \"0.118110\", +| |\n | \"memory_available\": \"0\", +| |\n | \"tserver_root_memory_limit\": \"11166914969\", +| |\n | \"tserver_root_memory_soft_limit\": \"9491877723\",+| |\n | \"tserver_root_memory_consumption\": \"52346880\" +| |\n | } | |\n d105c3a6128640f5a25cc74435e48ae3 | { +| OK |\n | \"memory_free\": \"0\", +| |\n | \"memory_total\": \"17179869184\", +| |\n | \"cpu_usage_user\": \"0.135189\", +| |\n | \"cpu_usage_system\": \"0.119284\", +| |\n | \"memory_available\": \"0\", +| |\n | \"tserver_root_memory_limit\": \"11166914969\", +| |\n | \"tserver_root_memory_soft_limit\": \"9491877723\",+| |\n | \"tserver_root_memory_consumption\": \"55074816\" +| |\n | } | |\n a321e13e5bf24060a764b35894cd4070 | { +| OK |\n | \"memory_free\": \"0\", +| |\n | \"memory_total\": \"17179869184\", +| |\n | \"cpu_usage_user\": \"0.135827\", +| |\n | \"cpu_usage_system\": \"0.118110\", +| |\n | \"memory_available\": \"0\", +| |\n | \"tserver_root_memory_limit\": \"11166914969\", +| |\n | \"tserver_root_memory_soft_limit\": \"9491877723\",+| |\n | \"tserver_root_memory_consumption\": \"62062592\" +| |\n | } | |\n```\n\n**Upgrade/Rollback safety:**\nThis is a new YSQL function, so there won't be any prior users of this function. In case of an upgrade/rollback, the sql migration (that adds the function to pg_proc) will only run when the upgrade is being finalized (i.e. after all tservers are updated). 
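As an illustration of the intended consumer (an external tool such as voyager polling the cluster and adapting its parallelism), here is a minimal client-side sketch. It is not part of the commit: the psycopg2 driver, the connection settings, and the reporting logic are assumptions; only the function name and the columns and metric keys shown in the sample output above are taken from the commit message.

```python
# Minimal client-side sketch (not part of the commit): assumes the psycopg2 driver,
# a reachable YSQL endpoint, and placeholder connection settings. Only the function
# name yb_servers_metrics() and the column/metric names come from the sample above.
import json
import psycopg2

def fetch_cluster_metrics(dsn="host=127.0.0.1 port=5433 dbname=yugabyte user=yugabyte"):
    """Return one dict of metrics per node, skipping nodes whose status is not OK."""
    with psycopg2.connect(dsn) as conn, conn.cursor() as cur:
        cur.execute("SELECT uuid, metrics, status, error FROM yb_servers_metrics()")
        nodes = []
        for uuid, metrics, status, _error in cur.fetchall():
            if status != "OK":
                continue  # skip nodes that failed to report
            # jsonb is usually adapted to a dict; the values are strings per the sample output
            data = metrics if isinstance(metrics, dict) else json.loads(metrics)
            nodes.append({"uuid": uuid, **data})
        return nodes

if __name__ == "__main__":
    for node in fetch_cluster_metrics():
        used = int(node["tserver_root_memory_consumption"])
        limit = int(node["tserver_root_memory_limit"])
        print(node["uuid"],
              "cpu_user=" + node["cpu_usage_user"],
              "tserver_mem_used={:.1%}".format(used / limit))
```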
[2024.2 | 2024-09-20 07:02 UTC | Lingeshwar (Lingeshwar S)]

[BACKPORT 2024.2][PLAT-15280][PLAT-14753] - Pg parity improvements and other ui fixes

Summary: Backport of [[ https://phorge.dev.yugabyte.com/D38191 | diff ]]

Test Plan: Tested manually

Reviewers: kkannan
Reviewed By: kkannan
Subscribers: ui, yugaware
Differential Revision: https://phorge.dev.yugabyte.com/D38218

[2024.2 | 2024-09-20 04:56 UTC | asharma-yb (Ayush Sharma)]

[BACKPORT 2024.2][PLAT-14435] Fix args parsing in failure detection py script

Summary:
Original commit: D38213
Key-value parsing fails for values which contain `=`.
E.g. for `--ysql_pg_conf_csv="shared_preload_libraries=passwordcheck"` the script fails with

```
Traceback (most recent call last):
  File "/home/yugabyte/disk_io_failure_detection_py3.py", line 157, in 
    config_dict = parse_config_file(DEFAULT_HOME_DIR + process + CONF_PATH)
  File "/home/yugabyte/disk_io_failure_detection_py3.py", line 121, in parse_config_file
    key, value = line[2:].split('=')
ValueError: too many values to unpack (expected 2)
```

Update the script to split after the first `=`.

Test Plan: This fix was tested in Fidelity's env

Reviewers: vpatibandla
Reviewed By: vpatibandla
Subscribers: yugaware
Differential Revision: https://phorge.dev.yugabyte.com/D38214
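For illustration, here is a minimal sketch of the parsing change described above. Only the failing expression `key, value = line[2:].split('=')` comes from the traceback; the file handling, the `--` prefix check, and the function shape are simplified stand-ins for the real disk_io_failure_detection_py3.py script and may differ from it.

```python
# Simplified sketch of the fix (not the actual script): only the failing line
# `key, value = line[2:].split('=')` is taken from the traceback above; the
# surrounding structure is assumed.
def parse_config_file(path):
    """Parse '--key=value' lines of a gflags conf file into a dict."""
    config = {}
    with open(path) as conf:
        for line in conf:
            line = line.strip()
            if not line.startswith("--") or "=" not in line:
                continue
            # Old behavior: line[2:].split('=') raises ValueError when the value
            # itself contains '='. Splitting at most once keeps such values intact.
            key, value = line[2:].split("=", 1)
            config[key] = value
    return config

# --ysql_pg_conf_csv="shared_preload_libraries=passwordcheck" now parses to
# {'ysql_pg_conf_csv': '"shared_preload_libraries=passwordcheck"'}
```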
\"/home/yugabyte/disk_io_failure_detection_py3.py\", line 121, in parse_config_file\n key, value = line[2:].split('=')\nValueError: too many values to unpack (expected 2)\n```\nUpdate the script to split after the first `=`.\n\nTest Plan: This fix was tested in fidelity's env\n\nReviewers: vpatibandla\n\nReviewed By: vpatibandla\n\nSubscribers: yugaware\n\nDifferential Revision: https://phorge.dev.yugabyte.com/D38215","shortMessageHtmlLink":"[BACKPORT 2024.1][PLAT-14435]Fix args parsing in failure detection py…"}},{"before":"90d4e933996bb47135535c62a7686d65ab533987","after":"294b7bbbfe65fe3e67e004cb6d6195c66855c0ae","ref":"refs/heads/master","pushedAt":"2024-09-20T04:55:24.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"asharma-yb","name":"Ayush Sharma","path":"/asharma-yb","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/109268085?s=80&v=4"},"commit":{"message":"[PLAT-14435]Fix args parsing in failure detection py script\n\nSummary:\nKey value parsing fails for values which contain `=`\nEg: For `--ysql_pg_conf_csv=\"shared_preload_libraries=passwordcheck\"`\nthe script fails with\n```\nTraceback (most recent call last):\n File \"/home/yugabyte/disk_io_failure_detection_py3.py\", line 157, in \n config_dict = parse_config_file(DEFAULT_HOME_DIR + process + CONF_PATH)\n File \"/home/yugabyte/disk_io_failure_detection_py3.py\", line 121, in parse_config_file\n key, value = line[2:].split('=')\nValueError: too many values to unpack (expected 2)\n```\nUpdate the script to split after the first `=`.\n\nTest Plan: This fix was tested in fidelity's env.\n\nReviewers: vpatibandla\n\nReviewed By: vpatibandla\n\nSubscribers: yugaware\n\nDifferential Revision: https://phorge.dev.yugabyte.com/D38213","shortMessageHtmlLink":"[PLAT-14435]Fix args parsing in failure detection py script"}},{"before":"f27e26c1ea1cad69d4be272a7b33b224bbeff901","after":"fce234a598e2af143b2f85fbbb333d9d5db951c3","ref":"refs/heads/2024.1","pushedAt":"2024-09-20T04:15:02.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"vipul-yb","name":"Vipul Bansal","path":"/vipul-yb","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/86227026?s=80&v=4"},"commit":{"message":"[BACKPORT 2024.1][PLAT-15282]: fix NPE on k8s operator backup/restore status update\n\nSummary:\nIf any failed backup/restore exists in k8s operator resource, then status update for other b/r restore is failed with NPE as old failed resource does not have status.\nOriginal diff/commit: 56c9cc9/D38125\n\nTest Plan: Tested manually by creating failed backup which does not have status and veridfied the issue but after this change, b/r status were updated correctly as failed one are skipped.\n\nReviewers: anijhawan, vkumar\n\nReviewed By: vkumar\n\nSubscribers: yugaware\n\nDifferential Revision: https://phorge.dev.yugabyte.com/D38169","shortMessageHtmlLink":"[BACKPORT 2024.1][PLAT-15282]: fix NPE on k8s operator backup/restore…"}},{"before":"b7f4056e48fe3ef0b9af3a8d8d6a9d01aed98e34","after":"8d4f15a4a4352d3009dab47df7a91ccbc6f2aaf0","ref":"refs/heads/2.20","pushedAt":"2024-09-20T04:13:45.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"vipul-yb","name":"Vipul Bansal","path":"/vipul-yb","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/86227026?s=80&v=4"},"commit":{"message":"[BACKPORT 2.20][PLAT-15282]: fix NPE on k8s operator backup/restore status update\n\nSummary:\nIf any failed backup/restore exists in k8s operator resource, then status update for other b/r restore is failed with NPE as old failed resource does 
[2.20 | 2024-09-20 04:13 UTC | vipul-yb (Vipul Bansal)]

[BACKPORT 2.20][PLAT-15282]: fix NPE on k8s operator backup/restore status update

Same commit message as the 2024.1 backport above (original diff/commit: 56c9cc9/D38125).

Differential Revision: https://phorge.dev.yugabyte.com/D38168

[master | 2024-09-20 03:04 UTC | spolitov (Sergei Politov)]

[#24020] DocDB: Vector LSM

Summary:
Initial implementation of Vector LSM.

Introduces the Vector LSM interface and some basic stubs.
Implements insertion of a vector batch across multiple threads.
The search implementation is incomplete.
Jira: DB-12907

Test Plan: Jenkins

Reviewers: mbautin, xCluster, hsunder
Reviewed By: mbautin
Subscribers: ybase
Tags: #jenkins-ready
Differential Revision: https://phorge.dev.yugabyte.com/D37877

[2.20 | 2024-09-20 03:03 UTC | nkhogen (Naorem Khogendro Singh)]

[BACKPORT 2.20][PLAT-15306] Update YBA node agent TLS certs to use strong ciphers

Summary:
Original diff - https://phorge.dev.yugabyte.com/D38047 (6990c7ad437586a50f363587b3f242b382ad2c4e)

The default TLS config for golang accepts even insecure cipher suites. This filters out the insecure ones.

Test Plan:
Before the fix:

```
ns-mbp-jmd6n:experiment nkhogen$ nmap -sV --script ssl-enum-ciphers -p 9070 10.9.75.20
Starting Nmap 7.94 ( https://nmap.org ) at 2024-09-13 13:17 PDT
Nmap scan report for 10.9.75.20
Host is up (0.036s latency).

PORT     STATE SERVICE  VERSION
9070/tcp open  ssl/http Golang net/http server (Go-IPFS json-rpc or InfluxDB API)
| ssl-enum-ciphers:
|   TLSv1.2:
|     ciphers:
|       TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 (secp256r1) - A
|       TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 (secp256r1) - A
|       TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 (secp256r1) - A
|       TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA (secp256r1) - A
|       TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA (secp256r1) - A
|       TLS_RSA_WITH_AES_128_GCM_SHA256 (rsa 2048) - A
|       TLS_RSA_WITH_AES_256_GCM_SHA384 (rsa 2048) - A
|       TLS_RSA_WITH_AES_128_CBC_SHA (rsa 2048) - A
|       TLS_RSA_WITH_AES_256_CBC_SHA (rsa 2048) - A
|       TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA (secp256r1) - C
|       TLS_RSA_WITH_3DES_EDE_CBC_SHA (rsa 2048) - C
|     compressors:
|       NULL
|     cipher preference: server
|     warnings:
|       64-bit block cipher 3DES vulnerable to SWEET32 attack
|   TLSv1.3:
|     ciphers:
|       TLS_AKE_WITH_AES_128_GCM_SHA256 (ecdh_x25519) - A
|       TLS_AKE_WITH_AES_256_GCM_SHA384 (ecdh_x25519) - A
|       TLS_AKE_WITH_CHACHA20_POLY1305_SHA256 (ecdh_x25519) - A
|     cipher preference: server
|_  least strength: C
```

After the fix:

```
ns-mbp-jmd6n:experiment nkhogen$ nmap -sV --script ssl-enum-ciphers -p 9070 10.9.142.135
Starting Nmap 7.94 ( https://nmap.org ) at 2024-09-13 13:28 PDT
Nmap scan report for 10.9.142.135
Host is up (0.034s latency).

PORT     STATE SERVICE  VERSION
9070/tcp open  ssl/http Golang net/http server (Go-IPFS json-rpc or InfluxDB API)
| ssl-enum-ciphers:
|   TLSv1.2:
|     ciphers:
|       TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 (secp256r1) - A
|       TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 (secp256r1) - A
|       TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 (secp256r1) - A
|       TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA (secp256r1) - A
|       TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA (secp256r1) - A
|       TLS_RSA_WITH_AES_128_GCM_SHA256 (rsa 2048) - A
|       TLS_RSA_WITH_AES_256_GCM_SHA384 (rsa 2048) - A
|       TLS_RSA_WITH_AES_128_CBC_SHA (rsa 2048) - A
|       TLS_RSA_WITH_AES_256_CBC_SHA (rsa 2048) - A
|     compressors:
|       NULL
|     cipher preference: server
|   TLSv1.3:
|     ciphers:
|       TLS_AKE_WITH_AES_128_GCM_SHA256 (ecdh_x25519) - A
|       TLS_AKE_WITH_AES_256_GCM_SHA384 (ecdh_x25519) - A
|       TLS_AKE_WITH_CHACHA20_POLY1305_SHA256 (ecdh_x25519) - A
|     cipher preference: server
|_  least strength: A
```

Reviewers: nbhatia, sanketh
Reviewed By: nbhatia
Subscribers: yugaware
Differential Revision: https://phorge.dev.yugabyte.com/D38148

[2024.1 | 2024-09-20 03:00 UTC | nkhogen (Naorem Khogendro Singh)]

[BACKPORT 2024.1][PLAT-15306] Update YBA node agent TLS certs to use strong ciphers

Same commit message as the 2.20 backport above (original diff D38047).

Differential Revision: https://phorge.dev.yugabyte.com/D38147
[master | 2024-09-20 02:15 UTC | SrivastavaAnubhav (Anubhav Srivastava)]

[#24001] docdb: Replace tablet in tablegroup manager on repartition of colocated table

Summary:
Dropping a colocated database table was failing (timing out) because we could not find the table in the tablegroup manager when trying to delete it. The error was: `Tablegroup { id: 00004002000030008000000000004003, db_id: 00004002000030008000000000000000, tablet_id: c36ddd84016b400daf39be900d8b74b3, tables: [] } does not contain a table with ID 00004002000030008000000000004000`.

D37436 changed TablegroupManager to properly replace the colocated tablet by removing the existing tablegroup entry and adding a new one, but it missed adding the child tables.

This diff takes the simpler approach of simply changing the tablet pointed to in `TablegroupInfo`. These classes are protected by the catalog manager mutex (see comments in the class).
Jira: DB-12889

Test Plan: Updated a test: `./yb_build.sh release --cxx-test integration-tests_minicluster-snapshot-test --gtest_filter *CloneYsqlSyntax/1`

Reviewers: mhaddad
Reviewed By: mhaddad
Subscribers: ybase
Differential Revision: https://phorge.dev.yugabyte.com/D38188

[master | 2024-09-19 22:55 UTC | Jethro-M]

[PLAT-15158] Update replication frequency tooltip

Summary:
This diff adds a warning to the HA replication frequency field tooltip to inform users that the HA standby sync alert will be triggered repeatedly if the replication frequency is greater than the alert threshold.

Test Plan:
- Verify the expected message appears on the HA replication frequency tooltip.
{F287109}

Reviewers: rmadhavan, muthu
Reviewed By: muthu
Subscribers: yugaware
Differential Revision: https://phorge.dev.yugabyte.com/D38183
[pg15-cherrypicks | 2024-09-19 21:13 UTC | karthik-ramanathan-3006 (Karthik Ramanathan)]

[BACKPORT pg15-cherrypicks][#20908] YSQL: Introduce interface to optimize index non-key column updates

Summary:
**Conflict resolution for PG 15 cherry-pick**
- src/postgres/src/backend/access/yb_access/yb_lsm.c:
  - Location: ybcinbuildCallback:
    - My master commit adds a new function `doAssignForIdxUpdate`
    - YB-PG 15 (b263910ba90dd273d10a8ecffa90b8affcdea529) has modified the params to the adjacent function `ybcinbuildCallback` (added 'ybctid', removed 'heapTuple') and formatted the args
    - Merge resolution: Keep the new function `doAssignForIdxUpdate`, keep the YB-PG 15 params to the adjacent function `ybcinbuildCallback`
- src/postgres/src/include/access/amapi.h:
  - Location: struct IndexAmRoutine definition
    - My master commit adds a new field `ybamcanupdatetupleinplace`
    - Upstream PG (4d8a8d0c738410ec02aab46b1ebe1835365ad384) has added two new fields: `amusemaintenanceworkmem`, `amparallelvacuumoptions`
    - Merge resolution: Keep both changes
- src/postgres/src/include/access/genam.h:
  - Location: Near index_insert, *index_delete
    - My master commit adds a new function `yb_index_update`.
    - YB-PG 15 (55782d561e55ef972f2470a4ae887dd791bb4a97) has formatted the params of `index_insert` and `index_delete` and renamed index_delete to `yb_index_delete`
    - Merge resolution: This is an adjacency conflict: keep the new function `yb_index_update`, apply formatting for `index_insert` and `index_delete` and rename index_delete to `yb_index_delete`
- src/postgres/src/include/executor/ybcModifyTable.h:
  - Location: Near YBCExecuteDeleteIndex
    - My master commit adds a new function `YBCExecuteUpdateIndex` (after `YBCExecuteDeleteIndex`)
    - YB-PG 15 has removed a new line after `YBCExecuteDeleteIndex` (473ebf820f8f7a52f3deeacc7800b09f03caff96)
    - Merge resolution: Keep the new function `YBCExecuteUpdateIndex` and add the new line.
- src/postgres/src/include/utils/relcache.h:
  - Location: Near YbComputeIndexExprOrPredicateAttrs
    - My master commit adds a new function `CheckUpdateExprOrPred`
    - Upstream PG (e7eea52b2d61917fbbdac7f3f895e4ef636e935b) has added a new function `RelationGetIdentityKeyBitmap`; YB-PG 15 (55782d561e55ef972f2470a4ae887dd791bb4a97) has moved `CheckIndexForUpdate` to a new location lower down in the file.
    - Merge resolution: Keep the PG function `RelationGetIdentityKeyBitmap`, move `CheckIndexForUpdate` to the new location and add `CheckUpdateExprOrPred` below CheckIndexForUpdate
- src/postgres/src/include/executor/executor.h:
  - Location: Near "prototypes from functions in execIndexing.c"
    - My master commit adds a new function `YbExecUpdateIndexTuples`.
    - YB-PG 15 has:
      - Changed parameters of `ExecInsertIndexTuples` (refer df512700dc50eb1403650f0a2158a74f143aa43e)
      - Deleted `ExecInsertIndexTuplesOptimized`, `ExecDeleteIndexTuplesOptimized` (refer df512700dc50eb1403650f0a2158a74f143aa43e)
      - Reformatted `ExecCheckIndexConstraints` (refer 4abc34cc4d9caae40c32f17524f725c6bdd1c1bd)
      - Moved `ExecDeleteIndexTuples` to below check_exclusion_constraint (refer 55782d561e55ef972f2470a4ae887dd791bb4a97)
    - Merge resolution: This is an adjacency conflict; keep the YB-PG 15 side of changes, add `YbExecUpdateIndexTuples` below `ExecDeleteIndexTuples`.
- src/postgres/src/backend/executor/execIndexing.c:
  - Location: imports
    - My master commit adds a new import `catalog/pg_am_d.h`
    - YB-PG 15 has renamed the section to `Yugabyte includes`
    - Merge resolution: Keep the new import, keep the renamed comment `Yugabyte includes`
  - Location: ExecInsertIndexTuples (top half)
    - My master commit:
      - Removes the variables: applyNoDupErr, checkUnique, indexUnchanged, satisfiesConstraint
      - Removes the condition that checks if the index is part of the skip list.
    - Upstream PG (9dc718bdf2b1a574481a45624d42b674332e2903) has added the indexUnchanged variable.
    - Merge resolution: Move the declaration into `YbExecDoInsertIndexTuple` for the reasons below.
  - Location: ExecInsertIndexTuples (bottom half)
    - My master commit moves the index insert logic (from `FormIndexDatum()` to adding the index to a recheck list, both inclusive) into a helper function `YbExecDoInsertIndexTuple`.
    - Merge resolution: Keep the master side of changes, copy over the following logic from current to `YbExecDoInsertIndexTuple`:
      - Computation of indexUnchanged plus the comment block above it. This will require reformatting.
      - `satisfiesConstraint = index_insert(...)`. This will require reformatting.
  - Location: ExecDeleteIndexTuples
    - My master commit moves the index delete logic (from `FormIndexDatum()` to MemoryContextSwitchTo, both inclusive) into a helper function `YbExecDoDeleteIndexTuple`.
    - YB-PG 15 (55782d561e55ef972f2470a4ae887dd791bb4a97) has changed index_delete to `yb_index_delete`.
    - Merge resolution: Change index_delete to `yb_index_delete`, move the index delete logic into the helper function `YbExecDoDeleteIndexTuple`, and copy over the `heapRelation` variable computation into `YbExecDoDeleteIndexTuple`.
- src/postgres/src/backend/executor/nodeModifyTable.c:
  - Location: ExecUpdate:
    - My master commit replaces calls to `ExecDeleteIndexTuples` and `ExecInsertIndexTuples` (note: these functions were previously named *Optimized) with a single call to `YbExecUpdateIndexTuples`.
    - YB-PG 15 has:
      - Moved ExecDeleteIndexTuples + ExecInsertIndexTuples into `ExecUpdateEpilogue`
      - Added a condition to return early in case of a cross partition update, causing an adjacency conflict.
      - Freed the bitmapset `cols_marked_for_update` in `YBExecUpdateAct`.
    - Merge resolution:
      - Keep the YB-PG 15 side of changes.
      - Create new variables `Bitmapset *cols_marked_for_update = NULL;` and `bool pk_is_updated = false;` in `ExecUpdate`.
      - Add new params `Bitmapset **cols_marked_for_update`, `bool *yb_is_pk_updated` to `YBExecUpdateAct`.
      - Replace all instances of `cols_marked_for_update` in `YBExecUpdateAct` with `*cols_marked_for_update` (5 instances).
      - Replace all instances of `&cols_marked_for_update` in `YBExecUpdateAct` with `cols_marked_for_update` (1 instance).
      - Replace all instances of `is_pk_updated` in `YBExecUpdateAct` with `*is_pk_updated` (2 instances).
      - Add new params `Bitmapset *yb_cols_marked_for_update`, `bool yb_is_pk_updated` to `ExecUpdateEpilogue`.
      - Replace the logic inside `if (YBCRelInfoHasSecondaryIndices(resultRelInfo) ...)` with the logic from my master commit.
      - Within this logic, replace estate with `context->estate`, planSlot with `context->planSlot`, cols_marked_for_update with `yb_cols_marked_for_update`, and is_pk_updated with `yb_is_pk_updated`.
      - Free the bitmapset `cols_marked_for_update` below YbClearSkippableEntities in `ExecUpdate`.
- src/postgres/src/backend/access/index/indexam.c:
  - Location: yb_index_update
    - Not a merge conflict, but the IndexAmRoutine field in struct RelationData has been renamed to `rd_indam`.
    - Apply the changed name to yb_index_update.

Summary of other changes to account for differences between PG 11 and PG 15:
- src/postgres/src/backend/executor/execIndexing.c:
  - Location: YbExecDoInsertIndexTuple
    - `index_insert` no longer requires the HeapTuple to be inserted as a param. It is fetched from the slot.
    - Instead, the `tupleid` (ItemPointer) is now passed as a param instead of fetching the HeapTuple from the slot and then computing it.
    - An update hint is now passed as a param to compute whether the index is unchanged in case of an update for a postgres relation. This is not applicable to Yugabyte relations.
  - Location: ExecInsertIndexTuples
    - In reference to the above changes, the function call to `YbExecDoInsertIndexTuple` now requires the updated params.
  - Location: YbExecUpdateIndexTuples
    - The relation undergoing update is now referenced explicitly by the `ResultRelInfo` param rather than deriving it from the Estate (refer upstream PG: 1375422c7826a2bf387be29895e961614f69de4b).
    - The `tupleid` (ItemPointer) is now passed as a param to the function instead of fetching the HeapTuple from the slot and then computing it.
    - `MakeSingleTupleTableSlot` now requires passing in the ops associated with the type of tuple in the slot (refer upstream PG: 1a0586de3657cd35581f0639c87d5050c6197bb7).
    - The IndexAmRoutine field in struct RelationData has been renamed from `rd_amroutine` to `rd_indam`.
    - The function call to `YbExecDoInsertIndexTuple` now requires the updated params.
- src/postgres/src/backend/executor/nodeModifyTable.c:
  - Location: ExecUpdateEpilogue
    - In reference to the above changes, the function call to `YbExecUpdateIndexTuples` now requires the updated params.
    - The bitmapset of columns marked for update (named yb_cols_marked_for_update) and the bool indicating whether the primary key is updated are required to be passed into `YbExecUpdateIndexTuples`, and so are required as params to `ExecUpdateEpilogue`.
  - Location: YBExecUpdateAct
    - In reference to the above changes, the bitmapset cols_marked_for_update and the bool is_pk_updated now need to be accessed outside this function, so they're passed as parameters to it. The bitmapset is no longer freed in this function.
  - Location: ExecUpdate
    - In reference to the above changes, the bitmapset cols_marked_for_update and the bool is_pk_updated now live for the duration of ExecUpdate. They are declared at the top of the function, and bms_free is invoked at the end.
  - Location: ExecMergeMatched
    - The YB-specific params in the function call to `ExecUpdateEpilogue` are populated with default/fake values.
    - A note has been added indicating why (the MERGE command is not supported in Yugabyte yet).

Notes for future self:
- Make the parameters in YbExecDoIndexTuple consistent.
- Evaluate slot usage and variables in ExecUpdate and related functions.

**Background**
Postgres updates tuples by deleting the existing copy of the tuple and reinserting it with an updated value. This applies to both relations (tables) and indexes. Thus, Postgres' generalized access method interface includes definitions for inserts and deletes but not updates.

DocDB supports the notion of in-place updates (PGSQL_UPDATE) when the key columns of the tuple remain unmodified. This mechanism is leveraged while updating tuples of main tables when the primary key of the table remains unmodified. However, no such mechanism exists for updating index tuples.

**Problem**
Pggate has a buffering mechanism for write requests which allows multiple write requests to be flushed at once, thus reducing the number of round trips between Postgres and DocDB. To preserve causality/ordering, this buffering mechanism does not allow multiple writes to the same tuple (identified by the tuple's CTID) to be enqueued together in the buffer. The previous write to the tuple is flushed before enqueueing the next write. This poses a problem when updating the non-key columns of indexes. For instance, when the included columns in a covering index need to be updated, the update is modeled as DELETE + INSERT requests. Since there is no change to the key columns of the index, this causes the update to require two flushes.

**Solution**
This revision introduces an index update interface in postgres for facilitating in-place updates of index tuples in DocDB. This replaces the need to perform a DELETE + INSERT on an index update when no key columns are modified. This is useful in two scenarios:
- Updating the INCLUDE columns in both unique and non-unique indexes
- Updating the base table CTID in unique indexes

Prior to updating the index tuple, the execution can check if the index access method supports in-place updates via a newly introduced indexAM field `ybamcanupdatetupleinplace`. Currently, in-place updates are only supported for LSM indexes.

Note - The base table CTID is a part of the index key for non-unique indexes.

**Example**
Consider a table containing two covering indexes, one unique and one non-unique. The example below displays the table schema and two update queries:
```
yugabyte=# \d demo
                Table "public.demo"
 Column |  Type   | Collation | Nullable | Default
--------+---------+-----------+----------+---------
 k      | integer |           | not null |
 v1     | integer |           |          |
 v2     | integer |           |          |
 v3     | integer |           |          |
 v4     | integer |           |          |
Indexes:
    "demo_pkey" PRIMARY KEY, lsm (k HASH)
    "demo_v2_v4_idx" UNIQUE, lsm (v2 HASH) INCLUDE (v4)
    "demo_v1_v3_idx" lsm (v1 HASH) INCLUDE (v3)

yugabyte=# SET yb_explain_hide_non_deterministic_fields TO true;

-- Query 1: Update of included columns in non-unique index
yugabyte=# EXPLAIN (ANALYZE, DIST) UPDATE demo SET v3 = v3 + 1 WHERE k = 1;
                                              QUERY PLAN
-----------------------------------------------------------------------------------------------------
 Update on demo  (cost=0.00..4.12 rows=1 width=96) (actual rows=0 loops=1)
   ->  Index Scan using demo_pkey on demo  (cost=0.00..4.12 rows=1 width=96) (actual rows=1 loops=1)
         Index Cond: (k = 1)
         Storage Table Read Requests: 1
         Storage Table Rows Scanned: 1
         Storage Table Write Requests: 1
         Storage Index Write Requests: 1   // would have been 2 previously
 Storage Read Requests: 1
 Storage Rows Scanned: 1
 Storage Write Requests: 2
 Storage Flush Requests: 1                 // would have been 2 previously
(11 rows)

-- Query 2: Update of primary key + included columns in unique index
yugabyte=# EXPLAIN (ANALYZE, DIST) UPDATE demo SET k = k + 1, v4 = v3 + 1 WHERE k = 10;
                                              QUERY PLAN
-----------------------------------------------------------------------------------------------------
 Update on demo  (cost=0.00..4.12 rows=1 width=96) (actual rows=0 loops=1)
   ->  Index Scan using demo_pkey on demo  (cost=0.00..4.12 rows=1 width=96) (actual rows=1 loops=1)
         Index Cond: (k = 10)
         Storage Table Read Requests: 1
         Storage Table Rows Scanned: 1
         Storage Table Write Requests: 2
         Storage Index Write Requests: 3   // would have been 4 previously
 Storage Read Requests: 1
 Storage Rows Scanned: 1
 Storage Write Requests: 5
 Storage Flush Requests: 1                 // would have been 2 previously
(11 rows)
```
Previously, both of the above cases would have required two flushes, as the key columns of the respective indexes remain unchanged across the DELETE and INSERT requests.

**Future Work**
- The computation of whether key columns of an index are modified can be performed at planning time. This can be done once D34040 lands.
- Refactor ApplyUpdate in pgsql_operation.cc to delineate table update (main table, index) and sequence update operations.
Jira: DB-9891

Original commit: f0a5db706e85/D36588

Test Plan:
Run the following tests:
```
./yb_build.sh --java-test 'org.yb.pgsql.TestPgRegressUpdateOptimized#schedule'
```

Reviewers: jason, tfoucher
Reviewed By: jason
Subscribers: yql, smishra, ybase
Tags: #jenkins-ready
Differential Revision: https://phorge.dev.yugabyte.com/D38144

[pg15-cherrypicks | 2024-09-19 21:07 UTC | jasonyb | 4 commits]

Merge branch 'pg15' into pg15-cherrypicks

[pg15 | 2024-09-19 20:30 UTC | timothy-e (Timothy Elgersma)]

[pg15] test: Fix yb_bitmap_scan_joins regress test

Summary:
Upstream PG's 3d351d916b20534f973eda760cde17d96545d4c4 set the default reltuples count to -1. If reltuples is -1, we don't know how many tuples there are. If reltuples >= 0, we assume that is a reasonable estimate. `pg_yb_tablegroup` has its reltuples set to 0 by default, because that's how many rows are in the table.

If the planner expects that a scan will read 100% of the tuples, it does not allow a bitmap scan. This case was hit in this test: reltuples = 0 was clamped to reltuples = 1. The planner expected 1 row returned, which was the entire table, so bitmap scans were disallowed.

Creating two tablegroups and analyzing the table (updating the reltuples count in pg_class) allows the planner to use a bitmap scan again.

Test Plan:
Jenkins: test regex: .*testPgRegressYbBitmapScans.*

```
pg15_tests/test_yb_bitmap_scans.sh
./yb_build.sh --java-test 'org.yb.pgsql.TestPgRegressYbBitmapScans#testPgRegressYbBitmapScans'
```

Now has only `yb_bitmap_scans_system` failing.

```
[ERROR] Failures:
[ERROR] TestPgRegressYbBitmapScans.testPgRegressYbBitmapScans:31->BasePgRegressTest.runPgRegressTest:64->BasePgRegressTest.runPgRegressTest:58->BasePgRegressTest.runPgRegressTest:48 pg_regress exited with error code: 1, failed tests: [yb_bitmap_scans_system]
[INFO]
[ERROR] Tests run: 1, Failures: 1, Errors: 0, Skipped: 0
```

Reviewers: jason, amartsinchyk
Reviewed By: jason
Subscribers: yql
Differential Revision: https://phorge.dev.yugabyte.com/D38137
[master | 2024-09-19 20:28 UTC | druzac]

[#23922] docdb: Handle colocated tablets correctly in tablet limit checks.

Summary: Currently the tablet limit guardrails treat colocated tables like any other create table request. That is, requests to create a colocated table will fail when the universe is close to or at its tablet limit. However, when creating a colocated table other than the first one, the request doesn't actually result in tablets being created, and thus the request should succeed. This diff fixes this bug.

Test Plan:
```
./yb_build.sh --with-tests --cxx-test-filter-re tablet_limits_integration_test --cxx-test tablet_limits_integration_test --gtest_filter 'CreateTableLimitTestRF1.CanAddColocatedTableAtLimit'
./yb_build.sh --with-tests --cxx-test-filter-re tablet_limits_integration_test --cxx-test tablet_limits_integration_test --gtest_filter 'CreateTableLimitTestRF1.CannotCreateFirstTableInColocatedDatabaseAtLimit'
```

Reviewers: mlillibridge
Reviewed By: mlillibridge
Subscribers: ybase, slingam
Differential Revision: https://phorge.dev.yugabyte.com/D38098

[master | 2024-09-19 19:32 UTC | siddharth2411]

[#23700] CDCSDK: Use leader epoch instead of leader term in table removal bg task

Summary:
The catalog manager background thread processes tables that need to be removed from CDC streams. As part of the removal, we modify the stream metadata and persist it in the sys-catalog table. While persisting, we were using the leader ready term instead of the leader epoch fetched at the start of the background thread; this has now been fixed.
Jira: DB-12608

Test Plan: Existing cdc ctests

Reviewers: xCluster, hsunder, skumar, sumukh.phalgaonkar
Reviewed By: skumar
Subscribers: ybase, ycdcxcluster
Tags: #jenkins-ready
Differential Revision: https://phorge.dev.yugabyte.com/D38155

[master | 2024-09-19 19:17 UTC | aishwarya24 (Aishwarya Chakravarthy) | PR merge]

[DOC-480] CDC metric description and voyager minor fixes (#24028)

* minor fixes
* changes from review
[2024.1 | 2024-09-19 18:47 UTC | pao214 (Bvsk Patnaik)]

[BACKPORT 2024.1][#22135] YSQL: Avoid read restart errors with ANALYZE

Summary:
Original commit: d298d4406060fa07aa9e714337bda1d4012ac98e / D37648
In the current state of the database, ANALYZE can run for a long time on large tables. This long duration increases the chances of errors. We want to minimize such error situations since running analyze again when there is an error can be expensive.

First, we tackle the read restart errors. ANALYZE does not require a strict read-after-commit-visibility guarantee, i.e. slightly stale reads are not an issue for the ANALYZE operation. Therefore, we want to avoid these errors for ANALYZE in particular. For this reason, we do not use an ambiguity window (i.e. we collapse the ambiguity window to a single point) for ANALYZE.

Moreover, in the current state of the database, DDLs are executed in a "special" transaction separate from the usual transaction code path. This means that multi-table ANALYZE operations such as `ANALYZE;` use a single read point for the entirety of the operation. This is undesirable since there may be a lot of tables in the database, and that increases the risk of a snapshot too old error. For this reason, we explicitly pass a fresh read time for the ANALYZE of each table from the Pg layer to the tserver proxy. Pg does not exhibit this problem since (a) it runs ANALYZE of each table in a separate transaction and (b) it does not clean up MVCC records that are in use.
Jira: DB-11062

Test Plan:
Jenkins

```
./yb_build.sh --cxx-test pg_analyze_read_time-test --gtest_filter PgAnalyzeReadTimeTest.InsertRowsConcurrentlyWithAnalyze
```

Insert rows concurrently with analyze to trigger read restart errors.

```
./yb_build.sh --cxx-test pg_analyze_read_time-test --gtest_filter PgAnalyzeReadTimeTest.AnalyzeMultipleTables
```

Analyze two tables and do a full compaction between the two ANALYZE operations.

```lang=sh
$ ./bin/ysqlsh
yugabyte=# create table keys(k int);
CREATE TABLE
... concurrently insert rows using ysql_bench and wait for a while
yugabyte=# analyze keys;
... fails with a read restart error prior to this change but not with this change.
```

To insert rows concurrently, use the following sql script:

```name=insert.sql,lang=sql
\set random_id random(1, 1000000)
INSERT INTO keys (k) VALUES (:random_id);
```

Ran ysql_bench using

```lang=sh
build/latest/postgres/bin/ysql_bench -t 100000 -f ../insert.sql -n -R 200
```

Backport-through: 2024.1

Reviewers: pjain, bkolagani, yguan
Reviewed By: bkolagani
Subscribers: yql, steve.varnau, svc_phabricator, smishra, ybase
Differential Revision: https://phorge.dev.yugabyte.com/D38178

[2024.2 | 2024-09-19 18:42 UTC | pao214 (Bvsk Patnaik)]

[BACKPORT 2024.2][#24012] YSQL: Replace deprecated function shared_ptr::unique to fix macos compiler error

Summary:
Original commit: 9feb6e80715b965493e52ed156edda1e7fe80fa1 / D38164
An unknown change started causing compilation errors on macOS. Fix these compiler errors.

Error 1: deprecated function error

```
/-------------------------------------------------------------------------------
| COMPILATION FAILED
|-------------------------------------------------------------------------------
../../src/yb/util/debug/long_operation_tracker.cc:119:22: error: no member named 'unique' in 'std::shared_ptr'
  119 |   if (!operation.unique()) {
      |        ~~~~~~~~~ ^
../../src/yb/util/debug/long_operation_tracker.cc:124:24: error: no member named 'unique' in 'std::shared_ptr'
  124 |     if (!operation.unique()) {
      |          ~~~~~~~~~ ^
2 errors generated.

Input files:
  src/yb/util/CMakeFiles/yb_util.dir/debug/long_operation_tracker.cc.o
  /Users/tfoucher/code/yugabyte-db3/src/yb/util/debug/long_operation_tracker.cc
Output file (from -o): src/yb/util/CMakeFiles/yb_util.dir/debug/long_operation_tracker.cc.o
\-------------------------------------------------------------------------------
```

Error 2: unused variable error

```
/-------------------------------------------------------------------------------
| COMPILATION FAILED
|-------------------------------------------------------------------------------
../../src/yb/docdb/docdb_util.cc:46:19: error: unused variable 'kEmptyLogPrefix' [-Werror,-Wunused-const-variable]
   46 | const std::string kEmptyLogPrefix;
      |                   ^~~~~~~~~~~~~~~
1 error generated.

Input files:
  src/yb/docdb/CMakeFiles/yb_docdb.dir/docdb_util.cc.o
  /Users/pbalivada/code/xcode/src/yb/docdb/docdb_util.cc
Output file (from -o): src/yb/docdb/CMakeFiles/yb_docdb.dir/docdb_util.cc.o
\-------------------------------------------------------------------------------
```

Error 3: unused variable error

```
/-------------------------------------------------------------------------------
| COMPILATION FAILED
|-------------------------------------------------------------------------------
../../src/yb/util/metrics-test.cc:64:21: error: unused variable 'kTableId' [-Werror,-Wunused-const-variable]
   64 | static const string kTableId = "table_id";
      |                     ^~~~~~~~
1 error generated.

Input files:
  src/yb/util/CMakeFiles/metrics-test.dir/metrics-test.cc.o
  /Users/pbalivada/code/xcode/src/yb/util/metrics-test.cc
Output file (from -o): src/yb/util/CMakeFiles/metrics-test.dir/metrics-test.cc.o
\-------------------------------------------------------------------------------
```

Fixes #24012
Jira: DB-12899

Test Plan:
Jenkins: compile only

Backport-through: 2.20

Reviewers: hsunder
Reviewed By: hsunder
Subscribers: ybase, hsunder
Differential Revision: https://phorge.dev.yugabyte.com/D38179
[master | 2024-09-19 18:41 UTC | fizaaluthra]

[#23882] YSQL: Improve cache re-invalidation for alter table commands

Summary:
Background:

When DDL atomicity is enabled, DDL transaction verification may lead to additional schema version bumps on an altered table.

This can create a scenario where, if an ALTER TABLE operation increments the table's schema version and subsequently performs a scan on it, any following DMLs on the same table within the same session may encounter a schema version mismatch error. This happens because after the YB alter invalidates the table cache entry, the table scan reloads it. When DDL transaction verification bumps the schema version of the table again, the previously reloaded table cache entry becomes invalid and needs to be reloaded again.

Commit 53477ae0f02a333542ab0e857310d93f24b85f96 introduced a re-invalidation mechanism to solve this problem.

This diff makes some changes to the re-invalidation mechanism:

- Instead of using the YB alter table handles to keep track of the affected tables, simply use the table oids. Although the usage of statement handles doesn't seem to cause any issues on YB master, it causes issues on YB PG15 as upstream PG has changed the flow of some alter table commands. Specifically, the YB PG memory context (`YBCPgMemctx`) where the statement handles are allocated may be freed in an earlier catch block than the one that executes `YbATInvalidateTableCacheAfterAlter`.

- In `YbATInvalidateTableCacheAfterAlter`, we now check if the relation still exists, as we have to retrieve its database oid and relfilenode oid. This is necessary because legacy rewrite operations that alter the relation's oid might have dropped the old relation. If the relation has been dropped, there's no need to invalidate cache entries, as any queries referencing the dropped relation will fail anyway.

- Commit 53477ae0f02a333542ab0e857310d93f24b85f96 added an optimization to skip schema version increments for alter type without rewrite. However, this approach may be flawed because YB currently drops and recreates dependent indexes when altering a column type, even when no rewrite occurs. This issue is tracked under #24007. For now, revert the changes to skip the schema version increment on the base table, so that we correctly track the relation as altered and execute the re-invalidation mechanism. Also remove the now unused variables 'rewriteState' and 'rewrite' from `YBCPrepareAlterTable` and `YBCPrepareAlterTableCmd`.

- Add a function in ybc_pggate to invalidate the table cache entry for a given database oid and relfilenode oid.
Jira: DB-12786

Test Plan: ./yb_build.sh --cxx-test pgwrapper_pg_ddl_atomicity-test --gtest_filter PgDdlAtomicityTest.TestTableCacheAfterTxnVerification

Reviewers: myang
Reviewed By: myang
Subscribers: yql
Differential Revision: https://phorge.dev.yugabyte.com/D38012

[2024.1 | 2024-09-19 18:41 UTC | pao214 (Bvsk Patnaik)]

[BACKPORT 2024.1][#24012] YSQL: Replace deprecated function shared_ptr::unique to fix macos compiler error

Same commit message as the 2024.2 backport above (original commit: 9feb6e80715b965493e52ed156edda1e7fe80fa1 / D38164).

Differential Revision: https://phorge.dev.yugabyte.com/D38180

[2.20 | 2024-09-19 18:23 UTC | pao214 (Bvsk Patnaik)]

[BACKPORT 2.20][#24012] YSQL: Replace deprecated function shared_ptr::unique to fix macos compiler error

Same commit message as the 2024.2 backport above (original commit: 9feb6e80715b965493e52ed156edda1e7fe80fa1 / D38164).

Differential Revision: https://phorge.dev.yugabyte.com/D38181
Fix these compiler errors.\n\nError 1: deprecated function error\n\n```\n/-------------------------------------------------------------------------------\n| COMPILATION FAILED\n|-------------------------------------------------------------------------------\n../../src/yb/util/debug/long_operation_tracker.cc:119:22: error: no member named 'unique' in 'std::shared_ptr'\n 119 | if (!operation.unique()) {\n | ~~~~~~~~~ ^\n../../src/yb/util/debug/long_operation_tracker.cc:124:24: error: no member named 'unique' in 'std::shared_ptr'\n 124 | if (!operation.unique()) {\n | ~~~~~~~~~ ^\n2 errors generated.\n\nInput files:\n src/yb/util/CMakeFiles/yb_util.dir/debug/long_operation_tracker.cc.o\n /Users/tfoucher/code/yugabyte-db3/src/yb/util/debug/long_operation_tracker.cc\nOutput file (from -o): src/yb/util/CMakeFiles/yb_util.dir/debug/long_operation_tracker.cc.o\n\\-------------------------------------------------------------------------------\n```\n\nError2: unused variable error\n\n```\n/-------------------------------------------------------------------------------\n| COMPILATION FAILED\n|-------------------------------------------------------------------------------\n../../src/yb/docdb/docdb_util.cc:46:19: error: unused variable 'kEmptyLogPrefix' [-Werror,-Wunused-const-variable]\n 46 | const std::string kEmptyLogPrefix;\n | ^~~~~~~~~~~~~~~\n1 error generated.\n\nInput files:\n src/yb/docdb/CMakeFiles/yb_docdb.dir/docdb_util.cc.o\n /Users/pbalivada/code/xcode/src/yb/docdb/docdb_util.cc\nOutput file (from -o): src/yb/docdb/CMakeFiles/yb_docdb.dir/docdb_util.cc.o\n\\-------------------------------------------------------------------------------\n```\n\nError 3: unused variable error\n\n```\n/-------------------------------------------------------------------------------\n| COMPILATION FAILED\n|-------------------------------------------------------------------------------\n../../src/yb/util/metrics-test.cc:64:21: error: unused variable 'kTableId' [-Werror,-Wunused-const-variable]\n 64 | static const string kTableId = \"table_id\";\n | ^~~~~~~~\n1 error generated.\n\nInput files:\n src/yb/util/CMakeFiles/metrics-test.dir/metrics-test.cc.o\n /Users/pbalivada/code/xcode/src/yb/util/metrics-test.cc\nOutput file (from -o): src/yb/util/CMakeFiles/metrics-test.dir/metrics-test.cc.o\n\\-------------------------------------------------------------------------------\n```\n\nFixes #24012\nJira: DB-12899\n\nTest Plan:\nJenkins: compile only\n\nBackport-through: 2.20\n\nReviewers: hsunder\n\nReviewed By: hsunder\n\nSubscribers: ybase, hsunder\n\nDifferential Revision: https://phorge.dev.yugabyte.com/D38181","shortMessageHtmlLink":"[BACKPORT 2.20][#24012] YSQL: Replace deprecated function shared_ptr:…"}},{"before":"8d228a8a027d341f0e208abd3bf66717f45ccdc1","after":"903d793045baabea98cb2680d34def2041e7012c","ref":"refs/heads/master","pushedAt":"2024-09-19T17:35:59.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"Vars-07","name":"Shubham Varshney","path":"/Vars-07","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/19248012?s=80&v=4"},"commit":{"message":"[PLAT-15328] Configure cgroup for non rhel9 machines as part of provision\n\nSummary: v1 cGroup implementation requires sudo access (this is used in non rhel9 machines). Given that we have moved the systemd configuration as part of configure phase, this was failing. 
With this diff, we move the cgroup configuration for non rhel9 machines to the provision phase.\n\nTest Plan:\nCreated rhel8/9 universes on azure with cgroup value configured.\niTest pipeline\n\nReviewers: dshubin, daniel, sanketh\n\nReviewed By: dshubin, daniel\n\nSubscribers: yugaware\n\nDifferential Revision: https://phorge.dev.yugabyte.com/D38166","shortMessageHtmlLink":"[PLAT-15328] Configure cgroup for non rhel9 machines as part of provi…"}},{"before":"64ac031e1ea64b27c8cbc67cf8b80f1cbb4f3509","after":"8d228a8a027d341f0e208abd3bf66717f45ccdc1","ref":"refs/heads/master","pushedAt":"2024-09-19T16:54:49.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"myang2021","name":null,"path":"/myang2021","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/79478033?s=80&v=4"},"commit":{"message":"[#23923] YSQL: Fix DDL atomicity check failure\n\nSummary:\nThe unit test Colocation/YbAdminSnapshotScheduleTestWithYsqlParam.PgsqlAddUniqueConstraint/1\nin the asan build fails frequently due to an assertion newly added by commit\na6981f20f910d2ad61051e3190c5cee768dacd1b:\n\n```\nF20240916 20:59:08 ../../src/yb/master/catalog_entity_info.cc:531] Check failed: !l->is_being_created_by_ysql_ddl_txn()\n```\n\nAfter debugging, I found that the assertion failure is related to a DDL statement\nthat adds a unique constraint via ALTER TABLE. It creates an index for\nthe base table. The DDL transaction involves two tables: the base table\nand the index. If we compare the base table's schema to decide whether\nthe PG side of the DDL transaction has committed or aborted, we will\nfind that the base table's schema does not change. So it is not possible\nto use the base table's schema to determine the transaction's\ncommit/abort state. In this case we must use the index for schema\ncomparison because it is newly created.\n\nThe bug is that the base table's schema is used for schema comparison\nand we returned std::nullopt to signify that we do not know whether the\nDDL transaction is committed or aborted. However, the std::nullopt is\nnot only used by the base table, but also used by the index to call\n`TableInfo::IsBeingDroppedDueToDdlTxn` because there are two tables in\nthe DDL transaction and we just loop through them using the same\nstd::nullopt. For the index, it is being created, so the first DCHECK\nbelow fails:\n\n```\n if (!txn_success.has_value()) {\n // This means the DDL txn only increments the schema version and does not change\n // the DocDB schema at all. It cannot be one of the following 2 cases.\n DCHECK(!l->is_being_created_by_ysql_ddl_txn());\n DCHECK(!l->is_being_deleted_by_ysql_ddl_txn());\n return false;\n }\n```\n\nI fixed the DCHECK failure by changing the API of `TableInfo::IsBeingDroppedDueToDdlTxn` to\nnot use std::optional for its argument `txn_success`. The function used to\ntake a simple `bool txn_success` argument. I restored that signature and only call the\nfunction when the computed status isn't `std::nullopt`. In other words, I now only call\nthe function when the caller has already figured out the DDL transaction's\ncommit/abort status.\n\nAdded a new unit test that would fail the assertion before the fix.\n\n
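A minimal sketch of the resulting caller pattern, for illustration only: `TableInfo::IsBeingDroppedDueToDdlTxn`, `txn_success`, and the skip-when-unknown behavior come from the summary above, while every other name and the stub body are hypothetical stand-ins rather than the actual catalog manager code.

```cpp
#include <optional>

// Hypothetical stand-in for the real TableInfo; only the method name and its
// now-plain-bool argument are taken from the summary above.
struct TableInfo {
  bool IsBeingDroppedDueToDdlTxn(bool /*txn_success*/) const {
    return false;  // stub; the real logic inspects the table's DDL txn state
  }
};

// The caller resolves the commit/abort status first and skips the per-table
// check when schema comparison could not decide, instead of forwarding
// std::nullopt into the function.
bool ShouldDropDueToDdlTxn(const TableInfo& table,
                           std::optional<bool> txn_success) {
  if (!txn_success.has_value()) {
    return false;  // status unknown for this table; nothing to decide here
  }
  return table.IsBeingDroppedDueToDdlTxn(*txn_success);
}
```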
NOTE: this diff only fixes the DCHECK failure so that we are not worse than before.\nHowever, we are not better than before either. There is still a flaw with the current\nDDL atomicity workflow based upon schema comparison when the DDL transaction\ninvolves multiple tables. Only the first table's schema is used for comparison\non the grounds that we only need to compare one table's schema against the PG catalog\nto determine the DDL transaction's committed/aborted status. While this assumption\nis generally true, it fails when the first table's schema does not change and the DDL only involves\na schema version increment. Currently it is not straightforward to revise the workflow\nto pick the right table for determining the DDL transaction's status based upon\nschema comparison. Therefore, if the PG side has aborted the DDL transaction, the\nschema-comparison-based method will conclude that there is no schema change on the base table\nand will therefore simply clear the DDL verification state for **both the base table and the index**,\nwhich is equivalent to a successful commit. This isn't right, and the DocDB index will be left\nas garbage that isn't cleaned up. This issue is tracked separately by https://github.com/yugabyte/yugabyte-db/issues/23988\nJira: DB-12825\n\nTest Plan:\n./yb_build.sh debug --cxx-test pg_ddl_atomicity-test --gtest_filter PgDdlAtomicityTest.TestAlterTableAddUniqueConstraint -n 10\n\n./yb_build.sh asan --cxx-test tools_yb-admin-snapshot-schedule-test --gtest_filter Colocation/YbAdminSnapshotScheduleTestWithYsqlParam.PgsqlAddUniqueConstraint/1 --clang17 -n 10 --tp 1\n\nReviewers: fizaa\n\nReviewed By: fizaa\n\nSubscribers: ybase, yql\n\nDifferential Revision: https://phorge.dev.yugabyte.com/D38107","shortMessageHtmlLink":"[#23923] YSQL: Fix DDL atomicity check failure"}},{"before":"fb1969b36b85514afbf66e51ac66c2770db22821","after":"3d96551320a923bb00c33e690bb0682a9c563b16","ref":"refs/heads/2024.1","pushedAt":"2024-09-19T16:44:44.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"myang2021","name":null,"path":"/myang2021","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/79478033?s=80&v=4"},"commit":{"message":"[BACKPORT 2024.1][#23786] YSQL: add yb_make_next_ddl_statement_nonincrementing to YbDbAdminVariables\n\nSummary:\nThe yb_make_next_ddl_statement_nonincrementing GUC has PGC_SUSET context and needs to be\nadded to YbDbAdminVariables in order for the yb_db_admin role to set it. 
Otherwise,\nwe see a permission denied error for yb_db_admin:\n\n```\nyugabyte=# set role yb_db_admin;\nSET\nyugabyte=> set yb_make_next_ddl_statement_nonincrementing to true;\nERROR: permission denied to set parameter \"yb_make_next_ddl_statement_nonincrementing\"\n```\n\nI added yb_make_next_ddl_statement_nonincrementing to YbDbAdminVariables, and\nthe above error is gone:\n\n```\nyugabyte=# set role yb_db_admin;\nSET\nyugabyte=> set yb_make_next_ddl_statement_nonincrementing to true;\nSET\n```\nJira: DB-12689\n\nOriginal commit: 2beb87297e96171ba5071a185bb0890b87001d83 / D38135\n\nTest Plan: ./yb_build.sh --cxx-test pg_catalog_version-test --gtest_filter PgCatalogVersionTest.NonIncrementingDDLMode\n\nReviewers: kfranz\n\nReviewed By: kfranz\n\nSubscribers: yql\n\nDifferential Revision: https://phorge.dev.yugabyte.com/D38174","shortMessageHtmlLink":"[BACKPORT 2024.1][#23786] YSQL: add yb_make_next_ddl_statement_noninc…"}},{"before":"fbef568133a6f8e9248ee887e5982f57edf90357","after":"64ac031e1ea64b27c8cbc67cf8b80f1cbb4f3509","ref":"refs/heads/master","pushedAt":"2024-09-19T16:34:08.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"mdbridge","name":"Mark","path":"/mdbridge","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/15827795?s=80&v=4"},"commit":{"message":"[#23978] xCluster: set up sequences_data stream(s) on target universe\n\nSummary:\nThis is part of the new sequences replication feature.\n\nHere we do the following when setting up sequences replication on the target universe:\n * create the sequences_data table if it does not already exist\n * connect to the corresponding stream for it from the source universe for each namespace we are replicating\n\nChecking that xCluster safe time is computed correctly will be done in a later diff.\n\nFor now, this feature is gated by a new flag, `--TEST_xcluster_enable_sequence_replication`, as well as by automatic mode replication being on.\n\nThis feature is only usable with DB-scoped replication.\n\nMore detail on the changes:\n * in order to deal with the fact that the code assumes that table IDs are unique across namespaces, I introduce the idea of a *sequences_data alias*. 
This is a table ID that looks like `0000ffff00003000800000000000ffff.sequences_data_for.00004001000030008000000000000000`\n * this table ID denotes the sequences_data table, of which there is one per universe, to code outside of xCluster\n * xCluster, however, can distinguish the aliases for different namespaces (the namespace is the last part in the alias) and extract the namespace that the alias corresponds to\n * in this way, we can pretend, at least for xCluster, that sequences_data is really a series of tables, one per namespace (see the sketch after this list)\n\n * added code for creating aliases of sequences_data TableIds\n * modified selected catalog manager routines to accept such aliases, operating on the actual sequences_data table\n * refactor `GetTablesEligibleForXClusterReplication` so it just returns information about the tables' naming instead of a full-blown COW entity that is hard to modify\n * alter now sets the field `automatic_ddl_mode` of `SetupUniverseReplicationRequestPB` to convey to the target universe that we are in automatic mode\n * modify the inbound replication group setup task so that for alter cases it uses the existing automatic DDL mode and is_db_scope settings rather than those from the incoming setup request\n * discrepancies in automatic DDL mode now return status errors\n * fixed code that used to test `is_db_scoped_` to now instead check `!source_namespace_ids_.empty()`, which is what the check actually did before I fixed is_db_scoped_ to mean what its name implies\n * (that is what is_db_scoped_ was previously incorrectly set to)\n * To allow tests using ybadmin to work even though we don't have a way of turning on automatic mode using ybadmin parameters, introduced a temporary new flag `--TEST_force_automatic_ddl_replication_mode` that causes ybadmin or YBA to get automatic replication mode when it asks for (semiautomatic) DB-scoped mode.\n\n
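A minimal, hypothetical sketch of the alias format quoted at the top of this list: the summary states that the last component is the namespace ID, and the first component appears to be the real sequences_data table ID. None of the helper names below are the functions added by this diff.

```cpp
#include <optional>
#include <string>

namespace sketch {

// Hypothetical helpers illustrating the quoted alias format
// <sequences_data table id>.sequences_data_for.<namespace id>.
const std::string kAliasInfix = ".sequences_data_for.";

std::string MakeSequencesDataAlias(const std::string& sequences_data_table_id,
                                   const std::string& namespace_id) {
  return sequences_data_table_id + kAliasInfix + namespace_id;
}

// Returns the namespace ID (the last part of the alias), or std::nullopt if
// the given table ID is not an alias.
std::optional<std::string> NamespaceIdFromAlias(const std::string& table_id) {
  const auto pos = table_id.find(kAliasInfix);
  if (pos == std::string::npos) {
    return std::nullopt;
  }
  return table_id.substr(pos + kAliasInfix.size());
}

}  // namespace sketch
```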
Fixes #23978\nJira: DB-12879\n\nTest Plan:\nThe xCluster integration tests that test DB-scoped replication include the following. I have modified these to test both automatic and semi-automatic modes as follows:\n * xcluster_outbound_replication_group-itest\n * most parameterized by mode; some use default mode\n * xcluster_db_scoped-test\n * a few parameterized by mode; some use default mode\n * xcluster_ddl_replication-test\n * always use automatic mode\n * xcluster_ddl_replication_pgregress-test\n * always use automatic mode\n * xcluster_ysql_index-test\n * a couple of tests use default mode; others don't use DB-scoped replication\n\nDefault mode is currently false to match production, but I have tested that everything passes with it switched to true. Julian, you may want to parameterize more or different tests as you add the extension to automatic mode.\n\nThere are various changes to the tests besides allowing automatic versus semi-automatic mode testing:\n * in automatic mode for xcluster_db_scoped-test, I ensure that the source and target universes have different namespace IDs\n * (this makes the sequence aliases for the two universes different, making sure the code doesn't get the wrong universe namespace ID when making an alias)\n * I fixed a bunch of bugs where we were using the incorrect universe's namespace or client\n * I verify that the sequences_data table gets created on the target\n\nI am not verifying that the sequences_data streams are connected to each other correctly or that data can flow through them in this diff. I will do that in the diff where I fix the pollers, which will allow me to flow data through these streams correctly.\n\nTesting the ybadmin sensing commands. I used the following setup:\n```\n~/code/yugabyte-db/bin/yb-ctl start --data_dir ~/yugabyte-data1 --ip_start 10 --master_flags \"v=2,enable_xcluster_api_v2=true,allowed_preview_flags_csv=enable_xcluster_api_v2,TEST_xcluster_enable_sequence_replication=true,TEST_xcluster_enable_ddl_replication=true,TEST_force_automatic_ddl_replication_mode=true\"\n~/code/yugabyte-db/bin/yb-ctl start --data_dir ~/yugabyte-data2 --ip_start 20 --master_flags \"v=2,enable_xcluster_api_v2=true,allowed_preview_flags_csv=enable_xcluster_api_v2,TEST_xcluster_enable_sequence_replication=true,TEST_xcluster_enable_ddl_replication=true\"\n\n~/code/yugabyte-db/bin/ysqlsh -h 127.0.0.10 -c 'CREATE SEQUENCE mdl_sequence INCREMENT 42 START 13;'\n~/code/yugabyte-db/bin/ysqlsh -h 127.0.0.20 -c 'CREATE SEQUENCE mdl_sequence INCREMENT 42 START 13;'\n\n# yugabyte database and mdl_sequence have the same OIDs on both sides at this point\n\n~/code/yugabyte-db/bin/ysqlsh -h 127.0.0.10 <#23978] xCluster: set up sequences_data stream(s) on target universe"}},{"before":"54793c8cda671bda42eadfe1ee29c36f89d5f424","after":"fbef568133a6f8e9248ee887e5982f57edf90357","ref":"refs/heads/master","pushedAt":"2024-09-19T16:26:31.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"shahrooz1997","name":"Hamidreza Zare","path":"/shahrooz1997","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/23063104?s=80&v=4"},"commit":{"message":"[PLAT-15378][localProvider][dr] Deflake testDrConfigSetup local provider test\n\nSummary:\nThis diff adds retry logic around the check for the number of rows in the target universe in an xCluster setup. Because xCluster replication is asynchronous, it retries for 1 minute to check whether the inserted row on the source universe shows up on the target universe.\n\nIt also disables the `com.yugabyte.yw.commissioner.tasks.local.DRDbScopedLocalTest#testDrDbScopedUpdate` local provider test.\n\nTest Plan:\n- Made sure the retry logic works properly when the delay is 10ms.\n- Made sure testDrConfigSetup passes\n\nReviewers: cwang, vbansal, sanketh\n\nReviewed By: vbansal\n\nSubscribers: yugaware\n\nDifferential Revision: https://phorge.dev.yugabyte.com/D38190","shortMessageHtmlLink":"[PLAT-15378][localProvider][dr] Deflake testDrConfigSetup local provi…"}},{"before":"0a6a31e199dd240c67dfa856827aee139ae66bed","after":"54793c8cda671bda42eadfe1ee29c36f89d5f424","ref":"refs/heads/master","pushedAt":"2024-09-19T14:48:45.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"druzac","name":null,"path":"/druzac","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/865068?s=80&v=4"},"commit":{"message":"[#22925] docdb: Persist tserver registry entries to sys catalog\n\nSummary:\nAdds writes for the ts descriptor objects before committing on all code paths.\n\nFlags added:\n - `persist_tserver_registry` - this is a test flag. It will be changed into an autoflag in a later diff. It toggles whether to write the tserver registry entries to the sys catalog when adding a new one. 
It also toggles whether to reload the tserver registry entries from disk when reloading the sys catalog.\n\nJira: DB-11843\n\nTest Plan:\n```\n./yb_build.sh --cxx-test master_heartbeat-itest --gtest_filter 'MasterHeartbeatITest.TestRegistrationThroughRaftPersisted'\n./yb_build.sh --cxx-test master-test --gtest_filter 'MasterTest.TestRegistrationThroughHeartbeatPersisted'\n./yb_build.sh --cxx-test master-test --gtest_filter 'MasterTest.TestUnresponsiveMarkingPersisted'\n./yb_build.sh --cxx-test master-test --gtest_filter 'MasterTest.TestHeartbeatFromRegisteredTSPersisted'\n```\n\nReviewers: asrivastava, amitanand\n\nReviewed By: asrivastava\n\nSubscribers: ybase, yql, slingam\n\nDifferential Revision: https://phorge.dev.yugabyte.com/D37279","shortMessageHtmlLink":"[#22925] docdb: Persist tserver registry entries to sys catalog"}}],"hasNextPage":true,"hasPreviousPage":false,"activityType":"all","actor":null,"timePeriod":"all","sort":"DESC","perPage":30,"cursor":"djE6ks8AAAAEu70tyQA","startCursor":null,"endCursor":null}},"title":"Activity · yugabyte/yugabyte-db"}