From Mateju.Miroslav at azd.cz Tue Aug 9 08:38:52 2016 From: Mateju.Miroslav at azd.cz (=?utf-8?B?TWF0xJtqxa8gTWlyb3NsYXYsIEluZy4=?=) Date: Tue, 9 Aug 2016 12:38:52 +0000 Subject: [CDash] Optimizing build2test table In-Reply-To: References: <856010006cd14dd5be99779cc2d04dce@azdexchstore3.AZD.LOCAL> Message-ID: Hi, I have finally found the bottleneck of my CDash server. It is related to the build2test table, as I expected. The performance limitation is caused by a suboptimal index. Most of the queries against the build2test table in the CDash sources filter rows using WHERE buildid=? AND testid=? Such queries cannot be served efficiently by the existing separate buildid and testid indexes, but a multi-column index on buildid+testid will cover them. (See https://www.percona.com/blog/2009/09/19/multi-column-indexes-vs-index-merge/) So I changed the buildid index on my testing CDash installation to also include the testid column. The index size of the build2test table rose by approx. 1 GiB to 5.6 GiB (while the data size is 3.3 GiB). From that point on, there are far fewer slow queries and CDash submissions are handled much faster: the real time of my CTest run (with several builds and thousands of tests) using synchronous submission decreased from 152 min to 69 min, and no submission exceeds the processing timeout (unlike before). The processing delay of an asynchronous submission series decreased slightly (max. 0.53 hours, compared to more than one hour). Should I create a patch, or can someone more experienced with the CDash source take care of it? I also have a few side notes which arose while I was examining the problem above: There is no tag corresponding to the current release 2.2.3 in the official Git repo. I tried the master branch, but it looks very different from the 2.2.x releases. The file monitor.php is causing most of the remaining slow queries. The function echo_project_data_sizes() contains pretty complicated queries and makes monitor.php practically useless (i.e.
unresponsive) or even harmful when used on a busy DB: it slows things down even more and possibly causes the processing timeout to be exceeded. This problem disappears when I comment out the call to echo_project_data_sizes(). This function should have its own page, and possibly a warning message that it may take long to load and reduce performance. Best regards, Miroslav Ing. Miroslav Matějů Programmer Analyst AŽD Praha s.r.o. Technology Division Research and Development Žirovnická 2/3146, 106 17 Prague Czech Republic Phone: +420 267 287 476 Web: www.azd.cz From: Zack Galbreath [mailto:zack.galbreath at kitware.com] Sent: Thursday, May 19, 2016 4:33 PM To: Matějů Miroslav, Ing. Cc: cdash at public.kitware.com Subject: Re: [CDash] Duplicate rows in build2test On Thu, May 19, 2016 at 9:40 AM, Matějů Miroslav, Ing. wrote: Hi, I am sending a few updates regarding the build2test problem. I copied my DB to a virtual machine and performed a few experiments to reduce the size of the build2test table. I was experimenting with an older backup version with a size of 9.3 GiB. (The DB on my production server is currently 17.5 GiB, while the single table build2test takes 11.7 GiB.) I tried to remove the duplicates using the method from https://stackoverflow.com/a/3312066/711006. First of all, I had to convert the table to MyISAM because of an InnoDB bug (https://stackoverflow.com/a/8053812/711006). The MyISAM version is about half the size of the InnoDB version. There were 1.9 million duplicate rows out of 67.6 M, and the data size decreased slightly from 1.3 GiB to 1.2 GiB, while creation of the new unique index caused the index to grow from 2.3 GiB to 3.2 GiB. I have not tried to convert the table back to InnoDB yet, but I would expect similar inflation of the index. So I would not recommend creating the unique index anymore, at least until I check it with InnoDB. I also found that the column status of type varchar(10) contains just three values.
Even the command SELECT status FROM build2test PROCEDURE ANALYSE()\G recommended changing the type to ENUM('failed','notrun','passed') NOT NULL. I tried it, and the table size decreased from 9.3 GiB to 8.5 GiB, reducing the size of both data and index, at least for InnoDB. However, the MyISAM version grew slightly (from 4.4 GiB to 5 GiB). I am going to try more optimization methods if time allows. Thanks for digging into this. I certainly appreciate it. Oftentimes I find that you need to run OPTIMIZE TABLE to reclaim disk space after making big changes to the database. Consider giving that a try if you haven't been doing so already. -------------- next part -------------- An HTML attachment was scrubbed... URL: From zack.galbreath at kitware.com Wed Aug 10 09:52:51 2016 From: zack.galbreath at kitware.com (Zack Galbreath) Date: Wed, 10 Aug 2016 09:52:51 -0400 Subject: [CDash] Optimizing build2test table In-Reply-To: References: <856010006cd14dd5be99779cc2d04dce@azdexchstore3.AZD.LOCAL> Message-ID: On Tue, Aug 9, 2016 at 8:38 AM, Matějů Miroslav, Ing. <Mateju.Miroslav at azd.cz> wrote: > Hi, I have finally found the bottleneck of my CDash server. It is related to the build2test table, as I expected. The performance limitation is caused by a suboptimal index. > Most of the queries against the build2test table in the CDash sources filter rows using WHERE buildid=? AND testid=? Such queries cannot be served efficiently by the existing separate buildid and testid indexes, but a multi-column index on buildid+testid will cover them. (See https://www.percona.com/blog/2009/09/19/multi-column-indexes-vs-index-merge/) > So I changed the buildid index on my testing CDash installation to also include the testid column. The index size of the build2test table rose by approx. 1 GiB to 5.6 GiB (while the data size is 3.3 GiB).
From that point on, there are far fewer slow queries and CDash submissions are handled much faster: the real time of my CTest run (with several builds and thousands of tests) using synchronous submission decreased from 152 min to 69 min, and no submission exceeds the processing timeout (unlike before). The processing delay of an asynchronous submission series decreased slightly (max. 0.53 hours, compared to more than one hour). Great, thanks for investigating this. I agree that a multi-column index makes more sense for the build2test table. > Should I create a patch, or can someone more experienced with the CDash source take care of it? Feel free to give this a shot yourself. You'll need to modify the CREATE TABLE statement for build2test in both copies of cdash.sql (sql/mysql/ and sql/pgsql/). Also add a call to AddTableIndex() somewhere around here. I'm happy to help you out with this if you run into any problems. > I also have a few side notes which arose while I was examining the problem above: There is no tag corresponding to the current release 2.2.3 in the official Git repo. I tried the *master* branch, but it looks very different from the 2.2.x releases. CDash still follows an old-school development style where even numbers represent stable releases and odd numbers represent development releases. So CDash 2.3 is a work in progress that will culminate in the release of CDash 2.4. > The file monitor.php is causing most of the remaining slow queries. The function echo_project_data_sizes() contains pretty complicated queries and makes monitor.php practically useless (i.e. unresponsive) or even harmful when used on a busy DB: it slows things down even more and possibly causes the processing timeout to be exceeded. This problem disappears when I comment out the call to echo_project_data_sizes(). This function should have its own page, and possibly a warning message that it may take long to load and reduce performance.
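The composite-index effect described above can be sketched in miniature. This is a hedged illustration only: it uses SQLite's query planner in place of the thread's MySQL server, and the table is reduced to the three columns mentioned in the thread; the index name idx_buildid_testid is made up for the example.

```python
import sqlite3

# Minimal illustration: a composite index on (buildid, testid)
# directly serves lookups of the form "WHERE buildid=? AND testid=?",
# which two separate single-column indexes cannot cover as one search.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute(
    "CREATE TABLE build2test (buildid INTEGER, testid INTEGER, status TEXT)"
)
cur.execute(
    "CREATE INDEX idx_buildid_testid ON build2test (buildid, testid)"
)

# Ask the planner how it would execute the typical CDash lookup.
plan = cur.execute(
    "EXPLAIN QUERY PLAN SELECT status FROM build2test "
    "WHERE buildid = 1 AND testid = 2"
).fetchall()
print(plan[-1][-1])  # the plan detail names the composite index it uses
```

On MySQL the equivalent check is EXPLAIN SELECT ...; the point either way is that the planner reports a single index search on both columns rather than an index-merge of two separate indexes.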
Good call, this page could definitely use some attention. Relocating or removing echo_project_data_sizes() sounds like a good first step. Thanks again for helping to improve CDash! -------------- next part -------------- An HTML attachment was scrubbed... URL: From zack.galbreath at kitware.com Wed Aug 10 10:12:02 2016 From: zack.galbreath at kitware.com (Zack Galbreath) Date: Wed, 10 Aug 2016 10:12:02 -0400 Subject: [CDash] Optimizing build2test table In-Reply-To: References: <856010006cd14dd5be99779cc2d04dce@azdexchstore3.AZD.LOCAL> Message-ID: On Wed, Aug 10, 2016 at 9:52 AM, Zack Galbreath wrote: > On Tue, Aug 9, 2016 at 8:38 AM, Matějů Miroslav, Ing. <Mateju.Miroslav at azd.cz> wrote: >> Should I create a patch or is anyone more experienced in the CDash source able to take care of it? > Feel free to give this a shot yourself. > You'll need to modify the CREATE TABLE statement for build2test in both copies of cdash.sql (sql/mysql/ and sql/pgsql/). > Also add a call to AddTableIndex() somewhere around here. Here's a better example of how to create a new multi-column index in upgrade.php: https://github.com/Kitware/CDash/blob/master/public/upgrade.php#L660-L663 -------------- next part -------------- An HTML attachment was scrubbed... URL:
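The duplicate-removal step discussed in the May 19 exchange above can be sketched as follows. This is an illustration under stated assumptions, not the thread's exact procedure: SQLite stands in for MySQL/MyISAM, the keep-the-first-row subquery replaces the linked ALTER IGNORE TABLE method (same effect, different mechanics), and the index name uniq_build_test is hypothetical.

```python
import sqlite3

# Sketch: keep one row per (buildid, testid) pair, delete the rest,
# then add a unique index so duplicates cannot reappear.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute(
    "CREATE TABLE build2test (buildid INTEGER, testid INTEGER, status TEXT)"
)
cur.executemany(
    "INSERT INTO build2test VALUES (?, ?, ?)",
    [(1, 1, "passed"), (1, 1, "passed"), (1, 2, "failed")],  # one duplicate pair
)

# Keep the lowest rowid in each (buildid, testid) group, delete the others.
cur.execute(
    "DELETE FROM build2test WHERE rowid NOT IN "
    "(SELECT MIN(rowid) FROM build2test GROUP BY buildid, testid)"
)
cur.execute(
    "CREATE UNIQUE INDEX uniq_build_test ON build2test (buildid, testid)"
)
remaining = cur.execute("SELECT COUNT(*) FROM build2test").fetchone()[0]
print(remaining)  # 2 -- the one duplicate row was removed
```

As the thread notes, on InnoDB the added unique index can inflate on-disk size considerably, so measure before applying this to a production table.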