Before you answer this: I've never developed anything popular enough to attract high server loads. Treat me as (sigh) an alien who has just landed on Earth, albeit one who knows PHP and a few optimization techniques.


I'm developing a tool in PHP that could pick up quite a few users if it works out right. However, while I'm fully capable of developing the program itself, I know next to nothing when it comes to building something that can cope with huge traffic. So here are a few questions about it (and feel free to turn this into a resource thread as well).

Database

At the moment I plan to use the MySQLi features in PHP5. However, how should I set up the databases in relation to users and content? Do I actually need multiple databases? At the moment everything's jumbled into one database, although I've been considering splitting user data into one, actual content into another, and finally core site content (template masters etc.) into a third. My reasoning behind this is that sending queries to different databases will ease the load on them, as one database = 3 load sources. Also, would this still be effective if they were all on the same server?

Caching

I have a template system used for building the pages and swapping in variables. The master templates are stored in the database, and whenever a template is called, its cached copy (an HTML document) is called up instead. At the moment these templates contain two types of variables: static and dynamic. Static variables are usually things like the page name or the site's name, things that rarely change; dynamic variables are things that change on every page load.
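For what it's worth, here is a tiny sketch of how that kind of static/dynamic placeholder swap might look; the {placeholder} syntax, the file path and the variable names are invented for illustration and aren't taken from the actual template system.

```php
<?php
// Tiny sketch of swapping variables into a cached template copy.
// The {placeholder} syntax and the variable names are made up for illustration.
$template = file_get_contents(__DIR__ . '/cache/article.html');

// "Dynamic" content built on each page load (e.g. from a DB query).
$renderedComments = '<ul><li>First comment</li></ul>';

$vars = array(
    '{site_name}' => 'My Site',          // static: rarely changes
    '{page_name}' => 'Articles',         // static: rarely changes
    '{comments}'  => $renderedComments,  // dynamic: changes every request
);

echo strtr($template, $vars);
```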

My question is:

Say I have comments on different articles. Which is the better solution: store a simple comment template and render the comments (from a DB call) on each page load, or store a cached copy of the comments page as an HTML page, regenerating it each time a comment is added/edited/deleted?

Finally

Does anyone have any tips/pointers for running a high-load site on PHP? I'm pretty sure it's a workable language to use (Facebook and Yahoo! give it serious precedence), but are there any experiences I should watch out for?


Current answer

I can't see myself switching away from MySQL any time soon, so I guess I don't need PDO's abstraction features. DavidM, thanks for your articles, they've helped me a lot.

Other answers

Profiling your application with something like Xdebug (as tj9991 recommended) is definitely a must. There's no sense in optimizing blindly. Xdebug will help you find the real bottlenecks in your code, so you can spend your optimization time wisely and fix the chunks of code that are actually causing the slowdown.

If you're using Apache, another utility that can help with testing is Siege. It will help you anticipate how your server and application will react to high load by really putting them through their paces.

Any kind of PHP opcode cache (like APC, or one of the others) will help a lot as well.

I've worked on a few sites backed by PHP and MySQL that get millions of hits a month. Here are some basics:

- Cache, cache, cache. Caching is one of the simplest and most effective ways to reduce load on your web server and database. Cache page content, queries, expensive computation, anything that is I/O bound. Memcache is dead simple and effective (see the sketch just after this list).
- Use multiple servers once you are maxed out. You can have multiple web servers and multiple database servers (with replication).
- Reduce the overall number of requests to your web servers. This entails caching JS, CSS and images using expires headers. You can also move your static content to a CDN, which will speed up your users' experience.
- Measure & benchmark. Run Nagios on your production machines and load test on your dev/qa server. You need to know when your server will catch on fire so you can prevent it.
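To illustrate the memcache point from the list above, here is a minimal read-through-cache sketch using PHP's Memcached extension; the server address, the key format and the loadArticleFromDb() helper are assumptions for the example, not part of the original answer.

```php
<?php
// Minimal read-through cache sketch (assumes the Memcached PECL extension and
// a memcached server on localhost:11211; loadArticleFromDb() is a hypothetical
// helper standing in for your real database query).
function getArticle(Memcached $cache, $id)
{
    $key = 'article:' . (int) $id;

    $article = $cache->get($key);
    if ($article !== false) {
        return $article;                 // cache hit: skip the database entirely
    }

    $article = loadArticleFromDb($id);   // cache miss: hit the database once
    $cache->set($key, $article, 300);    // keep it for 5 minutes
    return $article;
}

$cache = new Memcached();
$cache->addServer('127.0.0.1', 11211);
$article = getArticle($cache, 42);
```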

I'd recommend reading Building Scalable Web Sites; it was written by one of the engineers at Flickr and is a great reference.

Check out my blog post about scalability too, it has a lot of links to presentations about scaling across multiple languages and platforms: http://www.ryandoherty.net/2008/07/13/unicorns-and-scalability/

@Gary

Don't use MySQLi -- PDO is the 'modern' OO database access layer. The most important feature to use is placeholders in your queries. It's smart enough to use server-side prepares and other optimizations for you as well.

I'm looking at PDO now and it looks like you're right - but I know MySQL is developing a MySQLd extension for PHP - I think it's meant to succeed MySQL or MySQLi - what do you think about that?


@Ryan, Eric, tj9991

Thanks for the advice on PHP's caching extensions - could you explain the reasons for using one over another? I've heard great things about memcached over IRC but have never heard of APC - what are your opinions on them? I assume using multiple caching systems would be pretty counter-productive.

I'll definitely be picking out some testers; thanks a lot for your suggestions.

APC is an absolute must. Not only is it a great caching system, but the gains from the automatically cached PHP files are a godsend. As for the multiple-database idea, I don't think you would get much benefit from having different databases on the same server. It might give you a bit of a speed boost at query time, but I doubt the effort of deploying and maintaining the code to make sure all three stay in sync would be worth it.
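To make the APC suggestion concrete: the opcode caching happens automatically once the extension is enabled, but APC also exposes a user cache you can call from your own code. A minimal sketch follows; the key name and the expensiveComputation() helper are hypothetical.

```php
<?php
// Sketch of APC's user cache (assumes the APC extension is loaded;
// expensiveComputation() is a hypothetical stand-in for real work).
$key = 'sidebar:stats';

$stats = apc_fetch($key, $hit);
if (!$hit) {
    $stats = expensiveComputation();   // only runs on a cache miss
    apc_store($key, $stats, 600);      // cache the result for 10 minutes
}
```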

I also highly recommend running Xdebug to find the bottlenecks in your program. It made optimization a breeze for me.

No two sites are alike. You really need to get a tool like jmeter and benchmark to see where your problem points will be. You can spend a lot of time guessing and improving, but you won't see real results until you measure and compare your changes.

For example, for many years the MySQL query cache was the solution to all of our performance problems. If your site was slow, MySQL experts suggested turning the query cache on. It turns out that if you have a high write load, the cache is actually crippling. If you turned it on without testing, you would never know.
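As one small example of measuring instead of guessing, the sketch below reads MySQL's query-cache counters over PDO; the DSN and credentials are placeholders, and how you interpret the Qcache_* counters (hits vs. inserts vs. low-memory prunes) is up to you.

```php
<?php
// Sketch: look at the query cache counters instead of guessing.
// The DSN, user and password are placeholders for your own settings.
$pdo = new PDO('mysql:host=localhost;dbname=mysite', 'user', 'password');

foreach ($pdo->query("SHOW GLOBAL STATUS LIKE 'Qcache%'") as $row) {
    // e.g. Qcache_hits, Qcache_inserts, Qcache_lowmem_prunes
    echo $row['Variable_name'] . ': ' . $row['Value'] . PHP_EOL;
}
```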

Also remember that scaling is never finished. A site handling 10 req/s will need changes to support 1000 req/s. And if you're lucky enough to need to support 10,000 req/s, your architecture will probably look completely different as well.

Database

Don't use MySQLi -- PDO is the 'modern' OO database access layer. The most important feature to use is placeholders in your queries. It's smart enough to use server-side prepares and other optimizations for you as well.

You probably don't want to break your database up at this point. If you do find that one database isn't cutting it, there are several techniques to scale up, depending on your app. Replicating to additional servers typically works well if you have more reads than writes. Sharding is a technique to split your data over many machines.
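A minimal sketch of what the placeholder advice looks like with PDO; the DSN, credentials and the articles table are invented for the example.

```php
<?php
// Sketch of PDO with placeholders (prepared statements).
// The DSN, credentials and `articles` table are placeholders for your own schema.
$pdo = new PDO('mysql:host=localhost;dbname=mysite', 'user', 'password', array(
    PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
));

$articleId = 42;   // e.g. taken from the request

$stmt = $pdo->prepare('SELECT id, title, body FROM articles WHERE id = ?');
$stmt->execute(array($articleId));        // the driver handles quoting/escaping
$article = $stmt->fetch(PDO::FETCH_ASSOC);
```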

Caching

You probably don't want to cache in your database. The database is typically your bottleneck, so adding more I/O to it is typically a bad thing. There are several PHP caches out there that accomplish similar things, like APC and Zend. Measure your system with caching on and off. I bet your cache is heavier than serving the pages straight.

If it takes a long time to build your comments and article data from the DB, integrate memcache into your system. You can cache the query results and store them in a memcached instance. It's important to remember that retrieving the data from memcache must be faster than assembling it from the database to see any benefit.

If your articles aren't dynamic, or you have simple dynamic changes after they're generated, consider writing out HTML or PHP to the disk. You could have an index.php page that looks on disk for the article; if it's there, it streams it to the client. If it isn't, it generates the article, writes it to the disk and sends it to the client. Deleting files from the disk would cause pages to be re-written. If a comment is added to an article, delete the cached copy -- it would be regenerated.
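A rough sketch of the write-to-disk idea described above; the cache/ directory layout and the renderArticle() function are hypothetical, and invalidation is simply deleting the cached file when a comment changes.

```php
<?php
// Sketch of the disk cache: serve a pre-rendered article if it exists,
// otherwise render it, save it, and send it. renderArticle() is a
// hypothetical function standing in for your template/DB code.
$articleId = isset($_GET['id']) ? (int) $_GET['id'] : 0;
$cacheFile = __DIR__ . '/cache/article-' . $articleId . '.html';

if (is_file($cacheFile)) {
    readfile($cacheFile);                // cache hit: stream straight from disk
} else {
    $html = renderArticle($articleId);   // cache miss: build the page
    file_put_contents($cacheFile, $html);
    echo $html;
}

// When a comment is added/edited/deleted, delete the cached copy:
//     unlink($cacheFile);
// The next request will regenerate it.
```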