import sys


def most_frequent_words(filename, num_words=10):
    """Finds the most frequent words in a text file.

    Args:
        filename: The name of the file to process.
        num_words: The number of most frequent words to return.

    Returns:
        A list of tuples, where each tuple contains a word and its frequency.
    """
    word_counts = {}
    with open(filename, 'r') as file:
        for word in file.read().split():
            # get(word, 0) returns 0 for words not seen yet.
            word_counts[word] = word_counts.get(word, 0) + 1
    # Sort the (word, count) pairs by count, highest first.
    sorted_word_counts = sorted(word_counts.items(), key=lambda x: x[1], reverse=True)
    return sorted_word_counts[:num_words]


if __name__ == "__main__":
    if len(sys.argv) != 2:
        print("Usage: python script.py <filename>")
        sys.exit(1)
    filename = sys.argv[1]
    most_common_words = most_frequent_words(filename)
    for word, count in most_common_words:
        print(f"{word}: {count}")
```
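If you want to reuse the function from another script instead of the command line, you can import it directly. This is just a sketch: it assumes the code above is saved as `script.py` (matching the usage message) and that a file named `article.txt` exists.

```python
# Hypothetical reuse from another module; "article.txt" is a placeholder path.
from script import most_frequent_words

for word, count in most_frequent_words("article.txt", num_words=5):
    print(f"{word}: {count}")
```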
**Explanation:**
1. **Create a dictionary:** Initialize an empty dictionary `word_counts` to store word frequencies.
2. **Iterate over words:** Read the file and iterate over each word.
3. **Update word counts:** For each word, add it with a count of 1 if it is not yet in the dictionary, otherwise increment its count; `word_counts.get(word, 0) + 1` handles both cases in one expression (see the sketch after this list).
4. **Sort word counts:** Sort the word count pairs by frequency in descending order.
5. **Return top `num_words`:** Return the top `num_words` frequent words.
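To make steps 3 and 4 concrete, here is a minimal, self-contained sketch of the same counting and sorting idioms on a small hard-coded word list (the sample words are purely illustrative):

```python
# Illustrative only: a tiny hard-coded word list instead of a file.
words = ["the", "cat", "sat", "on", "the", "mat", "the"]

word_counts = {}
for word in words:
    # get(word, 0) returns 0 for unseen words, so this single expression
    # both adds new words with a count of 1 and increments existing counts.
    word_counts[word] = word_counts.get(word, 0) + 1

# Sort the (word, count) pairs by count, highest first.
sorted_word_counts = sorted(word_counts.items(), key=lambda x: x[1], reverse=True)
print(sorted_word_counts)  # [('the', 3), ('cat', 1), ('sat', 1), ('on', 1), ('mat', 1)]
```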
**While this approach is more manual, it gives a clearer view of the underlying dictionary operations and sorting techniques.**
**Note:** For larger files and more complex text analysis, the `collections.Counter` class offers a more efficient and concise solution.
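As a rough sketch of what that alternative could look like (same command-line behavior as the script above, with `Counter.most_common` doing the counting and sorting):

```python
# Counter-based variant of the same script.
import sys
from collections import Counter

def most_frequent_words(filename, num_words=10):
    with open(filename, 'r') as file:
        # Counter tallies the words; most_common sorts them by frequency.
        return Counter(file.read().split()).most_common(num_words)

if __name__ == "__main__":
    if len(sys.argv) != 2:
        print("Usage: python script.py <filename>")
        sys.exit(1)
    for word, count in most_frequent_words(sys.argv[1]):
        print(f"{word}: {count}")
```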