From Wikipedia, the free encyclopedia

Usability testing is a technique used in user-centered interaction design to evaluate a product by testing it on users. This can be seen as an irreplaceable usability practice, since it gives direct input on how real users use the system.[1] It is more concerned with the design intuitiveness of the product and is tested with users who have no prior exposure to it. Such testing is paramount to the success of an end product, as a fully functioning application that creates confusion amongst its users will not last for long.[2] This is in contrast with usability inspection methods, where experts use different methods to evaluate a user interface without involving users.

Usability testing focuses on measuring a human-made product's capacity to meet its intended purposes. Examples of products that commonly benefit from usability testing are food, consumer products, websites or web applications, computer interfaces, documents, and devices. Usability testing measures the usability, or ease of use, of a specific object or set of objects, whereas general human–computer interaction studies attempt to formulate universal principles.

What it is not

Simply gathering opinions on an object or a document is market research or qualitative research rather than usability testing. Usability testing usually involves systematic observation under controlled conditions to determine how well people can use the product.[3] However, often both qualitative research and usability testing are used in combination, to better understand users' motivations/perceptions, in addition to their actions.

Rather than showing users a rough draft and asking, "Do you understand this?", usability testing involves watching people trying to use something for its intended purpose. For example, when testing instructions for assembling a toy, the test subjects should be given the instructions and a box of parts and, rather than being asked to comment on the parts and materials, they should be asked to put the toy together. Instruction phrasing, illustration quality, and the toy's design all affect the assembly process.

Methods

Setting up a usability test involves carefully creating a scenario, or realistic situation, wherein the person performs a list of tasks using the product being tested while observers watch and take notes (dynamic verification). Several other test instruments such as scripted instructions, paper prototypes, and pre- and post-test questionnaires are also used to gather feedback on the product being tested (static verification). For example, to test the attachment function of an e-mail program, a scenario would describe a situation where a person needs to send an e-mail attachment, and would ask them to undertake this task. The aim is to observe how people function in a realistic manner, so that developers can identify the problem areas and fix them. Techniques popularly used to gather data during a usability test include the think aloud protocol, co-discovery learning and eye tracking.
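A test plan for such a scenario is often captured as structured data so that every participant receives the same tasks and observers record comparable measures. The following is a minimal, hypothetical sketch for the e-mail attachment scenario above; the field names, task wording, and success criteria are illustrative assumptions, not part of any standard tool or method.

```python
# A minimal, hypothetical sketch of a usability test plan for the
# e-mail attachment scenario. Field names and task wording are
# illustrative assumptions, not a standard format.
test_plan = {
    "scenario": "You need to send last month's report to a colleague.",
    "tasks": [
        {
            "id": "attach-file",
            "instruction": "Send report.pdf as an attachment to alex@example.com.",
            "success_criteria": "Message sent with the correct file attached",
            "time_limit_seconds": 300,
        },
    ],
    "measures": ["task completion", "time on task", "errors", "think-aloud notes"],
    "post_test_questionnaire": [
        "How easy or difficult was it to attach the file?",
        "What, if anything, was confusing?",
    ],
}

# Print the script the facilitator reads to each participant.
for task in test_plan["tasks"]:
    print(f"Task {task['id']}: {task['instruction']}")
```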

Hallway testing

Hallway testing, also known as guerrilla usability, is a quick and cheap method of usability testing in which people, such as those passing by in the hallway, are asked to try using the product or service. This can help designers identify "brick walls", problems so serious that users simply cannot advance, in the early stages of a new design. Anyone except the project's designers and engineers can be used as a test subject; those involved in the project tend to act as "expert reviewers" because they are too close to it.

This type of testing is an example of convenience sampling and thus the results are potentially biased.

Remote usability testing

In a scenario where usability evaluators, developers and prospective users are located in different countries and time zones, conducting a traditional lab usability evaluation creates challenges both from the cost and logistical perspectives. These concerns led to research on remote usability evaluation, with the user and the evaluators separated over space and time. Remote testing, which facilitates evaluations being done in the context of the user's other tasks and technology, can be either synchronous or asynchronous. The former involves real time one-on-one communication between the evaluator and the user, while the latter involves the evaluator and user working separately.[4] Numerous tools are available to address the needs of both these approaches.

Synchronous usability testing methodologies involve video conferencing or employ remote application sharing tools such as WebEx. WebEx and GoToMeeting are the most commonly used technologies to conduct a synchronous remote usability test.[5] However, synchronous remote testing may lack the immediacy and sense of "presence" desired to support a collaborative testing process. Moreover, managing interpersonal dynamics across cultural and linguistic barriers may require approaches sensitive to the cultures involved. Other disadvantages include having reduced control over the testing environment and the distractions and interruptions experienced by the participants in their native environment.[6] One of the newer methods developed for conducting a synchronous remote usability test is by using virtual worlds.[7]

Asynchronous methodologies include automatic collection of users' click streams, user logs of critical incidents that occur while interacting with the application, and subjective feedback on the interface by users.[6] Similar to an in-lab study, an asynchronous remote usability test is task-based, and the platform allows researchers to capture clicks and task times. Hence, for many large companies, this allows researchers to better understand visitors' intents when visiting a website or mobile site. This style of user testing also provides an opportunity to segment feedback by demographic, attitudinal and behavioral type. The tests are carried out in the user's own environment (rather than a lab), helping to further simulate real-life scenario testing. This approach also provides a vehicle to easily and quickly solicit feedback from users in remote areas, with lower organizational overheads. In recent years, conducting usability testing asynchronously has become increasingly prevalent, allowing participants to provide feedback in their own time and from the comfort of their own homes.
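As a rough illustration of the kind of automatic data collection described above, the sketch below summarises time on task and click counts from a hypothetical event log. The log format (participant, task, event, timestamp) is an assumption made for illustration and does not correspond to the export schema of any particular remote-testing platform.

```python
# A minimal sketch of summarising an asynchronous remote-test log.
# The (participant, task, event, timestamp) tuples are hypothetical.
from collections import defaultdict
from datetime import datetime

events = [
    ("p01", "attach-file", "task_start", "2023-05-02T10:00:05"),
    ("p01", "attach-file", "click",      "2023-05-02T10:00:12"),
    ("p01", "attach-file", "click",      "2023-05-02T10:00:41"),
    ("p01", "attach-file", "task_end",   "2023-05-02T10:01:40"),
    ("p02", "attach-file", "task_start", "2023-05-02T11:12:00"),
    ("p02", "attach-file", "click",      "2023-05-02T11:12:20"),
    ("p02", "attach-file", "task_end",   "2023-05-02T11:12:55"),
]

starts, ends = {}, {}
clicks = defaultdict(int)
for participant, task, event, timestamp in events:
    key = (participant, task)
    t = datetime.fromisoformat(timestamp)
    if event == "task_start":
        starts[key] = t
    elif event == "task_end":
        ends[key] = t
    elif event == "click":
        clicks[key] += 1

# Report time on task and click count per participant and task.
for key in starts:
    duration = (ends[key] - starts[key]).total_seconds()
    print(f"{key}: {duration:.0f} s on task, {clicks[key]} clicks")
```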

Expert review

Expert review is another general method of usability testing. As the name suggests, this method relies on bringing in experts with experience in the field (possibly from companies that specialize in usability testing) to evaluate the usability of a product.

A heuristic evaluation or usability audit is an evaluation of an interface by one or more human factors experts. Evaluators measure the usability, efficiency, and effectiveness of the interface based on usability principles, such as the 10 usability heuristics originally defined by Jakob Nielsen in 1994.[8]

Nielsen's usability heuristics, which have continued to evolve in response to user research and new devices, include:

  • Visibility of system status
  • Match between system and the real world
  • User control and freedom
  • Consistency and standards
  • Error prevention
  • Recognition rather than recall
  • Flexibility and efficiency of use
  • Aesthetic and minimalist design
  • Help users recognize, diagnose, and recover from errors
  • Help and documentation
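In practice, findings from a heuristic evaluation are commonly recorded against heuristics such as those listed above, together with a severity rating. The sketch below is a generic illustration of that bookkeeping; the 0–4 severity scale and the example findings are assumptions made for illustration rather than a prescribed part of the method.

```python
# A minimal, hypothetical sketch of recording heuristic-evaluation
# findings. The severity scale and example issues are illustrative.
from collections import Counter

findings = [
    {"heuristic": "Visibility of system status",
     "issue": "No progress indicator while the report uploads",
     "severity": 3},   # 0 = not a problem ... 4 = usability catastrophe
    {"heuristic": "Error prevention",
     "issue": "Send button is active before a recipient is entered",
     "severity": 2},
    {"heuristic": "Help users recognize, diagnose, and recover from errors",
     "issue": "Upload failure shows only an error code",
     "severity": 4},
]

# Summarise how many findings each heuristic attracted, most frequent first,
# along with the worst severity recorded for that heuristic.
by_heuristic = Counter(f["heuristic"] for f in findings)
for heuristic, count in by_heuristic.most_common():
    worst = max(f["severity"] for f in findings if f["heuristic"] == heuristic)
    print(f"{heuristic}: {count} finding(s), worst severity {worst}")
```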

Automated expert review

Similar to expert reviews, automated expert reviews provide usability testing but through the use of programs given rules for good design and heuristics. Though automated reviews may not provide as much detail and insight as reviews from people, they can be completed more quickly and more consistently. The idea of creating surrogate users for usability testing is an ambitious direction for the artificial intelligence community.

A/B testing

In web development and marketing, A/B testing or split testing is an experimental approach to web design (especially user experience design), which aims to identify changes to web pages that increase or maximize an outcome of interest (e.g., click-through rate for a banner advertisement). As the name implies, two versions (A and B) are compared, which are identical except for one variation that might impact a user's behavior. Version A might be the one currently used, while version B is modified in some respect. For instance, on an e-commerce website the purchase funnel is typically a good candidate for A/B testing, as even marginal improvements in drop-off rates can represent a significant gain in sales. Significant improvements can be seen through testing elements like copy text, layouts, images and colors.
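As a concrete illustration, the outcome of an A/B comparison is usually judged with a significance test on the two conversion rates. The sketch below applies a two-proportion z-test to hypothetical visitor and conversion counts; the numbers and the choice of test are assumptions for illustration, not a prescribed procedure.

```python
# A minimal sketch of evaluating an A/B test with a two-proportion z-test.
# The visitor and conversion counts below are hypothetical.
from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Return the z statistic and two-sided p-value for the difference
    in conversion rates between variants A and B."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)                 # pooled rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))   # standard error
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Hypothetical results: A = current page, B = modified page.
z, p = two_proportion_z_test(conv_a=120, n_a=2400, conv_b=156, n_b=2400)
print(f"z = {z:.2f}, p = {p:.3f}")  # a small p-value suggests B's lift is unlikely to be chance
```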

Multivariate testing or bucket testing is similar to A/B testing but tests more than two versions at the same time.

Number of participants

In the early 1990s, Jakob Nielsen, at that time a researcher at Sun Microsystems, popularized the concept of using numerous small usability tests—typically with only five participants each—at various stages of the development process. His argument is that, once it is found that two or three people are totally confused by the home page, little is gained by watching more people suffer through the same flawed design. "Elaborate usability tests are a waste of resources. The best results come from testing no more than five users and running as many small tests as you can afford."[9]

The claim of "five users is enough" was later described by a mathematical model[10] which states, for the proportion U of uncovered problems,

U = 1 − (1 − p)^n

where p is the probability of one subject identifying a specific problem and n is the number of subjects (or test sessions). As the number of subjects increases, U approaches 1 asymptotically, i.e., the number of problems found approaches the number of real existing problems.
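A short worked example of the formula follows. The value p = 0.31 is used as an illustrative average problem-discovery rate; treat it as an assumption, since the true rate varies widely between products and studies.

```python
# Worked example of the model above: U = 1 - (1 - p)^n.
# p = 0.31 is an illustrative average discovery rate, not a constant.
def proportion_found(p: float, n: int) -> float:
    """Proportion of usability problems uncovered after n test sessions."""
    return 1 - (1 - p) ** n

for n in (1, 3, 5, 10, 15):
    print(f"{n:2d} users -> {proportion_found(0.31, n):.0%} of problems")
# With p = 0.31, five users uncover roughly 84% of the problems,
# which is the basis of the "five users is enough" claim.
```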

In later research Nielsen's claim has been questioned using both empirical evidence[11] and more advanced mathematical models.[12] Two key challenges to this assertion are:

  1. Since usability is related to the specific set of users, such a small sample size is unlikely to be representative of the total population, so the data from such a small sample is more likely to reflect the sample group than the population it is meant to represent.
  2. Not every usability problem is equally easy to detect; intractable problems slow down the overall process, and under these circumstances the progress of the process is much shallower than the Nielsen/Landauer formula predicts.[13]

Nielsen does not advocate stopping after a single test with five users; his point is that testing with five users, fixing the problems they uncover, and then testing the revised site with five different users is a better use of limited resources than running a single usability test with 10 users. In practice, the tests are run once or twice per week during the entire development cycle, using three to five test subjects per round, and with the results delivered within 24 hours to the designers. The number of users actually tested over the course of the project can thus easily reach 50 to 100 people. Research shows that user testing conducted by organisations most commonly involves the recruitment of 5-10 participants.[14]

In the early stage, when users are most likely to immediately encounter problems that stop them in their tracks, almost anyone of normal intelligence can be used as a test subject. In stage two, testers will recruit test subjects across a broad spectrum of abilities. For example, in one study, experienced users showed no problem using any design, from the first to the last, while naive users and self-identified power users both failed repeatedly.[15] Later on, as the design smooths out, users should be recruited from the target population.

When the method is applied to a sufficient number of people over the course of a project, the objections raised above are addressed: the sample size ceases to be small, and usability problems that arise with only occasional users are found. The value of the method lies in the fact that specific design problems, once encountered, are never seen again because they are immediately eliminated, while the parts that appear successful are tested over and over. Although the initial problems in the design may be tested by only five users, when the method is properly applied, the parts of the design that worked in that initial test will go on to be tested by 50 to 100 people.

Example

A 1982 Apple Computer manual for developers advised on usability testing:[16]

  1. "Select the target audience. Begin your human interface design by identifying your target audience. Are you writing for businesspeople or children?"
  2. Determine how much target users know about Apple computers, and the subject matter of the software.
  3. Steps 1 and 2 permit designing the user interface to suit the target audience's needs. Tax-preparation software written for accountants might assume that its users know nothing about computers but are experts on the tax code, while such software written for consumers might assume that its users know nothing about taxes but are familiar with the basics of Apple computers.

Apple advised developers, "You should begin testing as soon as possible, using drafted friends, relatives, and new employees":[16]

Our testing method is as follows. We set up a room with five to six computer systems. We schedule two to three groups of five to six users at a time to try out the systems (often without their knowing that it is the software rather than the system that we are testing). We have two of the designers in the room. Any fewer, and they miss a lot of what is going on. Any more and the users feel as though there is always someone breathing down their necks.

Designers must watch people use the program in person, because[16]

Ninety-five percent of the stumbling blocks are found by watching the body language of the users. Watch for squinting eyes, hunched shoulders, shaking heads, and deep, heart-felt sighs. When a user hits a snag, he will assume it is "on account of he is not too bright": he will not report it; he will hide it ... Do not make assumptions about why a user became confused. Ask him. You will often be surprised to learn what the user thought the program was doing at the time he got lost.

Education

Usability testing has been a formal subject of academic instruction in different disciplines.[17] Usability testing is important to composition studies and online writing instruction (OWI).[18] Scholar Collin Bjork argues that usability testing is "necessary but insufficient for developing effective OWI, unless it is also coupled with the theories of digital rhetoric."[19]

Survey research

Survey products include paper and digital surveys, forms, and instruments that can be completed or used by the survey respondent alone or with a data collector. Usability testing is most often done in web surveys and focuses on how people interact with the survey, such as navigating the survey, entering survey responses, and finding help information. Usability testing complements traditional survey pretesting methods such as cognitive pretesting (how people understand the products), pilot testing (how the survey procedures will work), and expert review by a subject matter expert in survey methodology.[20]

In translated survey products, usability testing has shown that "cultural fitness" must be considered at the sentence and word levels and in the designs for data entry and navigation,[21] and that presenting translations and visual cues of common functionalities (tabs, hyperlinks, drop-down menus, and URLs) helps to improve the user experience.[22]

References

  1. ^ Nielsen, J. (1994). Usability Engineering, Academic Press Inc, p 165
  2. ^ Mejs, Monika (2025-08-06). "Usability Testing: the Key to Design Validation". Mood Up team - software house. Retrieved 2025-08-06.
  3. ^ Dennis G. Jerz (July 19, 2000). "Usability Testing: What Is It?". Jerz's Literacy Weblog. Retrieved June 29, 2016.
  4. ^ Andreasen, Morten Sieker; Nielsen, Henrik Villemann; Schrøder, Simon Ormholt; Stage, Jan (2007). "What happened to remote usability testing?". Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. p. 1405. doi:10.1145/1240624.1240838. ISBN 978-1-59593-593-9. S2CID 12388042.
  5. ^ Dabney Gough; Holly Phillips (2025-08-06). "Remote Online Usability Testing: Why, How, and When to Use It". Archived from the original on December 15, 2005.
  6. ^ a b Dray, Susan; Siegel, David (March 2004). "Remote possibilities?: international usability testing at a distance". Interactions. 11 (2): 10–17. doi:10.1145/971258.971264. S2CID 682010.
  7. ^ Chalil Madathil, Kapil; Greenstein, Joel S. (2011). "Synchronous remote usability testing". Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. pp. 2225–2234. doi:10.1145/1978942.1979267. ISBN 978-1-4503-0228-9. S2CID 14077658.
  8. ^ "Heuristic Evaluation". Usability First. Retrieved April 9, 2013.
  9. ^ "Usability Testing with 5 Users (Jakob Nielsen's Alertbox)". useit.com. 2025-08-06; references Nielsen, Jakob; Landauer, Thomas K. (1993). "A mathematical model of the finding of usability problems". Proceedings of the SIGCHI conference on Human factors in computing systems. pp. 206–213. doi:10.1145/169059.169166. ISBN 978-0-89791-575-5. S2CID 207177537.
  10. ^ Virzi, R. A. (1992). "Refining the Test Phase of Usability Evaluation: How Many Subjects is Enough?". Human Factors. 34 (4): 457–468. doi:10.1177/001872089203400407. S2CID 59748299.
  11. ^ Spool, Jared; Schroeder, Will (2001). "Testing web sites: five users is nowhere near enough". CHI '01 Extended Abstracts on Human Factors in Computing Systems. p. 285. doi:10.1145/634067.634236. S2CID 8038786.
  12. ^ Caulton, D. A. (2001). "Relaxing the homogeneity assumption in usability testing". Behaviour & Information Technology. 20 (1): 1–7. doi:10.1080/01449290010020648. S2CID 62751921.
  13. ^ Schmettow, Martin (1 September 2008). "Heterogeneity in the Usability Evaluation Process". Electronic Workshops in Computing. doi:10.14236/ewic/HCI2008.9.
  14. ^ "Results of the 2020 User Testing Industry Report". www.userfountain.com. Retrieved 2025-08-06.
  15. ^ Bruce Tognazzini. "Maximizing Windows".
  16. ^ a b c Meyers, Joe; Tognazzini, Bruce (1982). Apple IIe Design Guidelines (PDF). Apple Computer. pp. 11–13, 15.
  17. ^ Breuch, Lee-Ann M. Kastman; Zachry, Mark; Spinuzzi, Clay (April 2001). "Usability Instruction in Technical Communication Programs: New Directions in Curriculum Development". Journal of Business and Technical Communication. 15 (2): 223–240. doi:10.1177/105065190101500204. S2CID 61365767.
  18. ^ Miller-Cochran, Susan K.; Rodrigo, Rochelle L. (January 2006). "Determining effective distance learning designs through usability testing". Computers and Composition. 23 (1): 91–107. doi:10.1016/j.compcom.2005.12.002.
  19. ^ Bjork, Collin (September 2018). "Integrating Usability Testing with Digital Rhetoric in OWI". Computers and Composition. 49: 4–13. doi:10.1016/j.compcom.2018.05.009. S2CID 196160668.
  20. ^ Geisen, Emily; Bergstrom, Jennifer Romano (2017). Usability Testing for Survey Research. Cambridge: Elsevier MK Morgan Kaufmann Publishers. ISBN 978-0-12-803656-3.
  21. ^ Wang, Lin; Sha, Mandy (2025-08-06). "Cultural Fitness in the Usability of U.S. Census Internet Survey in Chinese Language". Survey Practice. 10 (3): 1–8. doi:10.29115/SP-2017-0018.
  22. ^ Sha, Mandy; Hsieh, Y. Patrick; Goerman, Patricia L. (2025-08-06). "Translation and visual cues: Towards creating a road map for limited English speakers to access translated Internet surveys in the United States". Translation & Interpreting. 10 (2): 142–158. doi:10.12807/ti.110202.2018.a10. ISSN 1836-9324.