{"id":6889,"date":"2014-05-08T16:33:47","date_gmt":"2014-05-08T16:33:47","guid":{"rendered":"https:\/\/unknownerror.org\/index.php\/2014\/05\/08\/problem-about-database-performance-collection-of-common-programming-errors\/"},"modified":"2014-05-08T16:33:47","modified_gmt":"2014-05-08T16:33:47","slug":"problem-about-database-performance-collection-of-common-programming-errors","status":"publish","type":"post","link":"https:\/\/unknownerror.org\/index.php\/2014\/05\/08\/problem-about-database-performance-collection-of-common-programming-errors\/","title":{"rendered":"problem about database-performance-Collection of common programming errors"},"content":{"rendered":"<ul>\n<li><img decoding=\"async\" src=\"http:\/\/www.gravatar.com\/avatar\/8671bf5ef615345a04355532d98c1c42?s=32&amp;d=identicon&amp;r=PG\" \/><br \/>\nBrainCore<br \/>\nmysql innodb myisam database-performance<br \/>\nAfter noticing that our database has become a major bottleneck on our live production systems, I decided to construct a simple benchmark to get to the bottom of the issue.The benchmark: I time how long it takes to increment the same row in an InnoDB table 3000 times, where the row is indexed by its primary key, and the column being updated is not part of any index. I perform these 3000 updates using 20 concurrent clients running on a remote machine, each with its own separate connection to the D<\/li>\n<li><img decoding=\"async\" src=\"http:\/\/www.gravatar.com\/avatar\/15e429aa03b3fb947a1bc2389a1d0df4?s=32&amp;d=identicon&amp;r=PG\" \/><br \/>\nsushil bharwani<br \/>\nperformance joomla1.5 database-performance<br \/>\nWe have build a Intranet on Joomla, its used by more than 20,000 users. At times there are 200 or more concurrent users on the site. And site starts working slow and sometimes crashes. What should we be looking at. Are there known joomla performance issues that can be handled. We doubt that there are some database queries that makes it slow. 
Any suggestions in this direction will be helpful.<\/li>\n<li><img decoding=\"async\" src=\"http:\/\/www.gravatar.com\/avatar\/71770d043c0f7e3c7bc5f74190015c26?s=32&amp;d=identicon&amp;r=PG\" \/><br \/>\nGreg<br \/>\nmysql scaling database-performance<br \/>\nI know this is a horrible, generalized question with no good answer, and I apologize ahead of time, but I wonder if someone could take a stab at a very broad estimate. Let&#8217;s say you have a dedicated MySQL server running on about $1K worth of modern hardware. Let&#8217;s say the average user makes 20 read requests and 5 write requests per minute &#8212; all straightforward queries, no joins; mostly along the lines of &#8216;select this row by UUID&#8217; out of an indexed table of ~10,000,000 rows. Very, very, very, very<\/li>\n<li><img decoding=\"async\" src=\"http:\/\/www.gravatar.com\/avatar\/51fc7f0996643c5a88574b1109693bde?s=32&amp;d=identicon&amp;r=PG\" \/><br \/>\nAndrew Fashion<br \/>\ncentos5 performance-tuning dedicated-server database-performance<br \/>\nI have a quad-core Xeon, 8GB RAM, and only 2 x 1TB SATA drives right now. I am trying to figure out how to scale my application to handle as much load and traffic per day as possible. I have an industry network site, and it&#8217;s currently bogging down at around 30,000 signups a day, 60k UV, and 600-1,000,000 pageviews per day. It was going so slow I had to shut the server off, and now I am losing money and traffic. I&#8217;m pretty certain part of it is the PHP code and MySQL queries (hired them from India). W<\/li>\n<li><img decoding=\"async\" src=\"http:\/\/www.gravatar.com\/avatar\/20c3e31127962f3bf7205a8b4b258fb3?s=32&amp;d=identicon&amp;r=PG\" \/><br \/>\nEvgeny<br \/>\npostgresql database-administration database-performance<br \/>\nI have a couple of PostgreSQL tables (9.1) that are inserted into and deleted from often. 
Over time they suffer from index bloat even though autovacuum is configured and runs regularly. I&#8217;m thinking about automating a REINDEX on these tables. There will be no one to physically access the database, as the software will be installed at the client site and is literally supposed to run for years. I keep reading about &#8220;cron jobs&#8221; but I&#8217;m missing some guide or tutorial on how to best set it up, spe<\/li>\n<li><img decoding=\"async\" src=\"http:\/\/www.gravatar.com\/avatar\/a8f1d0a4d8f82dd6121eba142f5f8ee3?s=32&amp;d=identicon&amp;r=PG\" \/><br \/>\nRytis<br \/>\ndatabase corruption database-performance firebird<br \/>\nI am running several different Firebird versions (2.0, 2.1) on multiple entry-level Windows-based servers with wildly varying hardware. The only thing they have in common is that they run the same home-built application with the same database structure. Lately I&#8217;ve been seeing massive slowdowns on multiple servers. It turns out that the database gets corrupted, so each time it breaks I have to mend, back up, and restore the database; everything is fine for some time (1-2 weeks), and then it repeats<\/li>\n<li><img decoding=\"async\" src=\"http:\/\/i.stack.imgur.com\/WBMJi.png?s=32&amp;g=1\" \/><br \/>\ngravyface<br \/>\ndatabase-performance<br \/>\nI would like to know: if I have a website with a huge database that runs expensive (time-consuming) reports, is it better to have one database for the web and a replicated one for reports, or a single database for both? I&#8217;m worried that users may run reports spanning 5 or more years because they need that information, and the website will crash because of it.<\/li>\n<li><img decoding=\"async\" src=\"http:\/\/www.gravatar.com\/avatar\/8ab9fc370300390405c44c7bad071604?s=32&amp;d=identicon&amp;r=PG\" \/><br \/>\nuser422543<br \/>\nmysql crash mysqldump mysql-error-1064 database-performance<br \/>\nWe are using a Rails and MySQL Linux stack (Mongrels as the application server) for our 
application, and some of the modules unexpectedly stopped working. When we investigated the issue, the Mongrels were stuck (both CPU and memory usage were normal). We then tried to log in to the database; we were able to log in but unable to select the database (I mean, use db_name shows no error, it just hangs). Interestingly, we are able to select the stage DB and run queries on the stage database, which is in the same MySQL instan<\/li>\n<li><img decoding=\"async\" src=\"http:\/\/www.gravatar.com\/avatar\/a7d39be9e4dc439ccf657ffce1feb4bf?s=32&amp;d=identicon&amp;r=PG\" \/><br \/>\nProfessor Frink<br \/>\nmysql database performance-tuning database-performance mysql5<br \/>\nI currently have innodb_buffer_pool_size set to 2GB &#8211; yet I have well over 5GB of InnoDB databases, and another 4GB of free RAM on the server (CentOS 5). I tried to increase the value to 3GB, but MySQL refuses to launch after I make the change. I tried lowering it just for kicks, and MySQL seems to load fine. Any ideas as to why this is happening? The MySQL error log shows the following: 110615 08:17:37 mysqld started 110615 8:17:37 [Warning] option &#8216;max_join_size&#8217;: unsigned value 184467440737095<\/li>\n<\/ul>\n","protected":false},"excerpt":{"rendered":"<p>BrainCore mysql innodb myisam database-performance After noticing that our database has become a major bottleneck on our live production systems, I decided to construct a simple benchmark to get to the bottom of the issue. The benchmark: I time how long it takes to increment the same row in an InnoDB table 3000 times, where the 
[&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[1],"tags":[],"class_list":["post-6889","post","type-post","status-publish","format-standard","hentry","category-uncategorized"],"_links":{"self":[{"href":"https:\/\/unknownerror.org\/index.php\/wp-json\/wp\/v2\/posts\/6889","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/unknownerror.org\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/unknownerror.org\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/unknownerror.org\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/unknownerror.org\/index.php\/wp-json\/wp\/v2\/comments?post=6889"}],"version-history":[{"count":0,"href":"https:\/\/unknownerror.org\/index.php\/wp-json\/wp\/v2\/posts\/6889\/revisions"}],"wp:attachment":[{"href":"https:\/\/unknownerror.org\/index.php\/wp-json\/wp\/v2\/media?parent=6889"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/unknownerror.org\/index.php\/wp-json\/wp\/v2\/categories?post=6889"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/unknownerror.org\/index.php\/wp-json\/wp\/v2\/tags?post=6889"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}