MediaWiki API result

This is the HTML representation of the JSON format. HTML is good for debugging, but is unsuitable for application use.

Specify the format parameter to change the output format. To see the non-HTML representation of the JSON format, set format=json.
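For application use, the request that produced this result can be rebuilt with format=json. This is a minimal sketch: the endpoint URL is an assumption (substitute your wiki's api.php), and the query parameters are inferred from the response below (generator=allpages with prop=revisions). Passing rvslots=main also avoids the legacy-format warning included in this result.

```python
from urllib.parse import urlencode

# Assumed endpoint -- substitute the api.php URL of the wiki being queried.
API_URL = "https://example.org/w/api.php"

# Parameters inferred from the response below: generator=allpages with
# prop=revisions.  format=json selects the machine-readable output, and
# rvslots=main avoids the deprecated legacy revision format.
params = {
    "action": "query",
    "generator": "allpages",
    "prop": "revisions",
    "rvprop": "content",
    "rvslots": "main",
    "format": "json",
}

request_url = API_URL + "?" + urlencode(params)
print(request_url)
```

With rvslots=main, revision content is returned under a "slots" key rather than the bare "*" key shown below.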

{
    "batchcomplete": "",
    "continue": {
        "gapcontinue": "Real_Estate",
        "continue": "gapcontinue||"
    },
    "warnings": {
        "main": {
            "*": "Subscribe to the mediawiki-api-announce mailing list at <https://lists.wikimedia.org/postorius/lists/mediawiki-api-announce.lists.wikimedia.org/> for notice of API deprecations and breaking changes."
        },
        "revisions": {
            "*": "Because \"rvslots\" was not specified, a legacy format has been used for the output. This format is deprecated, and in the future the new format will always be used."
        }
    },
    "query": {
        "pages": {
            "1213": {
                "pageid": 1213,
                "ns": 0,
                "title": "Re-ranking",
                "revisions": [
                    {
                        "contentformat": "text/x-wiki",
                        "contentmodel": "wikitext",
                        "*": "{{see also|Machine learning terms}}\n==Introduction==\nRe-ranking, also known as rank refinement or re-scoring, is an essential technique in [[machine learning]] that aims to improve the quality of ranked results generated by a primary ranking model. It involves using a secondary model to adjust the initial ranking produced by the primary model, based on various features and criteria. Re-ranking is widely applied in diverse fields, such as [[information retrieval]], [[natural language processing]], and [[recommender systems]].\n\n==Re-ranking Process==\nThe re-ranking process consists of two primary steps:\n\n===Primary Ranking===\nThe primary ranking model generates an initial ranking of items or results. This model can be based on different algorithms, such as [[support vector machines]], [[decision trees]], or [[neural networks]]. The primary ranking typically considers a limited number of features to produce a fast and efficient ranking.\n\n===Secondary Ranking===\nAfter obtaining the initial ranking, a secondary model is employed to refine and adjust the ranking. The secondary model may take into account additional features, context, or user preferences that were not considered by the primary model. This secondary model can be based on various techniques, including [[machine learning algorithms]], [[ensemble methods]], and [[deep learning]] architectures.\n\n==Re-ranking Applications==\nRe-ranking techniques have been applied to various domains, including:\n\n===Information Retrieval===\nIn [[information retrieval]], re-ranking is employed to refine search results generated by a primary ranking model. The primary model may generate an initial ranking based on keyword matching or other simple criteria. The secondary model then considers additional features, such as document relevance, user context, or query intent, to improve the quality and relevance of the search results.\n\n===Natural Language Processing===\nIn [[natural language processing]], re-ranking is used to improve the quality of outputs generated by language models, such as [[machine translation]] or [[text summarization]]. The primary model produces an initial set of candidate translations or summaries, while the secondary model adjusts the ranking based on criteria such as fluency, coherence, or content coverage.\n\n===Recommender Systems===\nRe-ranking is applied to [[recommender systems]] to refine the recommendations generated by a primary model. The primary model may produce an initial list of items based on user-item interactions, while the secondary model adjusts the ranking by incorporating additional features, such as item content, user demographics, or contextual information.\n\n==Explain Like I'm 5 (ELI5)==\nImagine you're at a toy store, and you ask the storekeeper to show you the best toys. The storekeeper quickly picks some toys based on what they think you might like. This is like the \"primary ranking\" in machine learning. Then, your mom comes over and looks at the toys the storekeeper picked. She knows you better and might consider other things like safety, quality, and your interests. So, she rearranges the toys to give you the best options for you. This is like the \"secondary ranking\" in machine learning. The whole process of your mom rearranging the toys to give you the best choices is called \"re-ranking.\"\n\n\n[[Category:Terms]] [[Category:Machine learning terms]] [[Category:Not Edited]] [[Category:updated]]"
                    }
                ]
            },
            "883": {
                "pageid": 883,
                "ns": 0,
                "title": "ReLU",
                "revisions": [
                    {
                        "contentformat": "text/x-wiki",
                        "contentmodel": "wikitext",
                        "*": "{{see also|Machine learning terms}}\n==ReLU in Machine Learning==\nReLU, or '''Rectified Linear Unit''', is a popular [[activation function]] used in [[artificial neural networks]] (ANNs) for implementing [[deep learning]] models. The primary role of an activation function is to introduce non-linearity in the model and improve its learning capability. ReLU has been widely adopted due to its simplicity, efficiency, and ability to mitigate the [[vanishing gradient problem]].\n\n===Definition===\nThe ReLU function is mathematically defined as:\n\n<math>f(x) = \\max(0, x)</math>\n\nWhere ''x'' represents the input to the function. The output of the function is the maximum of 0 and the input value. Consequently, ReLU is a piecewise linear function: positive input values are left unchanged and negative input values are set to zero. This simple formulation leads to its computational efficiency and easy implementation in machine learning algorithms.\n\n===Properties===\nReLU possesses several properties that make it a popular choice for activation functions in deep learning models:\n\n* '''Non-linearity:''' ReLU introduces non-linearity in the model, allowing it to learn complex and non-linear relationships between inputs and outputs.\n* '''Sparse activation:''' Because ReLU outputs zero for all negative inputs, it results in sparse activation of neurons, meaning only a subset of neurons are activated at any given time. This property can lead to improved efficiency and a reduction in overfitting.\n* '''Computational efficiency:''' The simplicity of ReLU's mathematical definition ensures that it is computationally efficient, allowing for faster training and reduced resource consumption in comparison to other activation functions.\n* '''Mitigation of the vanishing gradient problem:''' ReLU helps alleviate the vanishing gradient problem, which can occur in deep learning models when gradients become too small to effectively propagate through the network during backpropagation.\n\nHowever, ReLU also has some drawbacks, such as the [[dying ReLU]] problem, where certain neurons become inactive and cease to contribute to the learning process. This issue has led to the development of alternative activation functions, such as [[Leaky ReLU]] and [[Parametric ReLU]].\n\n==Explain Like I'm 5 (ELI5)==\nImagine you're trying to learn a new skill, like playing soccer. Your brain has to figure out which moves work well and which don't. In machine learning, a similar process happens when a computer tries to learn something new. The ReLU function helps the computer decide which parts of its \"brain\" to use for learning. When the computer finds something important, the ReLU function keeps it. When it finds something unimportant or bad, it sets it to zero. This way, the computer can learn more efficiently and figure out the best way to complete a task.\n\n[[Category:Terms]] [[Category:Machine learning terms]] [[Category:Not Edited]] [[Category:updated]]"
                    }
                ]
            }
        }
    }
}
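The definition in the "ReLU" article above, f(x) = max(0, x), can be illustrated in a few lines of plain Python, with no particular framework assumed:

```python
def relu(x: float) -> float:
    """Rectified Linear Unit: positive inputs pass through, negatives become 0."""
    return max(0.0, x)

print(relu(3.5))   # positive input is unchanged: 3.5
print(relu(-2.0))  # negative input is clamped: 0.0
```

Deep learning frameworks apply the same function element-wise over tensors, which is where the sparse-activation property described in the article comes from.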
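The two-step process described in the "Re-ranking" article above, a fast primary ranking followed by a secondary re-scoring of the shortlist, can be sketched as follows. All function names and scoring rules here are illustrative assumptions, not any particular library's API: the primary stage scores by raw keyword overlap, and the secondary stage re-scores by keyword density as a toy stand-in for the richer features a real secondary model would use.

```python
# Illustrative two-stage ranking sketch; the scoring rules are toy
# stand-ins for a real primary model and secondary model.

def primary_rank(query: str, docs: list[str], k: int = 3) -> list[str]:
    """Primary ranking: cheap keyword-overlap score, keep the top k."""
    terms = set(query.lower().split())

    def overlap(doc: str) -> int:
        return len(terms & set(doc.lower().split()))

    return sorted(docs, key=overlap, reverse=True)[:k]


def secondary_rank(query: str, candidates: list[str]) -> list[str]:
    """Secondary ranking: re-score the shortlist with a richer criterion
    (here, keyword density, which favors short, focused documents)."""
    terms = set(query.lower().split())

    def density(doc: str) -> float:
        words = doc.lower().split()
        return len(terms & set(words)) / len(words)

    return sorted(candidates, key=density, reverse=True)


docs = [
    "fast ranking of results with a large model and many extra words here",
    "fast ranking",
    "ranking algorithms overview",
    "unrelated cooking recipes",
]
# The primary stage puts the long document first (same keyword overlap,
# earlier in the list); the secondary stage promotes the focused one.
shortlist = primary_rank("fast ranking", docs)
print(secondary_rank("fast ranking", shortlist))
```

In practice the secondary stage would be a learned model, such as a gradient-boosted ranker or a neural cross-encoder, consuming features the primary stage ignores.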